
What’s Hidden in a Randomly Weighted Neural Network?

Authors

  • Vivek Ramanujan
  • Mitchell Wortsman
  • Aniruddha Kembhavi
  • Ali Farhadi
  • Mohammad Rastegari
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020

Abstract

Training a neural network is synonymous with learning the values of the weights. By contrast, we demonstrate that randomly weighted neural networks contain subnetworks which achieve impressive performance without ever training the weight values. Hidden in a randomly weighted Wide ResNet-50 is a subnetwork (with random weights) that is smaller than, but matches the performance of a ResNet-34 trained on ImageNet. Not only do these "untrained subnetworks" exist, but we provide an algorithm to effectively find them. We empirically show that as randomly weighted neural networks with fixed weights grow wider and deeper, an "untrained subnetwork" approaches a network with learned weights in accuracy.

1. Introduction

What lies hidden in an overparameterized neural network with random weights? If the distribution is properly scaled, then it contains a subnetwork which performs well without ever modifying the values of the weights (as illustrated by Figure 1 ).

Figure 1. If a neural network with random weights (center) is sufficiently overparameterized, it will contain a subnetwork (right) that performs as well as a trained neural network (left) with the same number of parameters.

The number of subnetworks is combinatorial in the size of the network, and modern neural networks contain millions or even billions of parameters [21] . We should expect that even a randomly weighted neural network contains a subnetwork that performs well on a given task. In this work, we provide an algorithm to find these subnetworks.

Finding subnetworks contrasts with the prevailing paradigm for neural network training: learning the values of the weights by stochastic gradient descent. Traditionally, the network structure is either fixed during training (e.g. ResNet [8] or MobileNet [9]), or optimized in conjunction with the weight values (e.g. Neural Architecture Search (NAS)). We instead optimize to find a good subnetwork within a fixed, randomly weighted network. We never tune the value of any weight in the network, not even the batch norm [10] parameters or the first or last layer.

(Figure 1 panel labels, left to right: a neural network τ which achieves good performance; a randomly initialized neural network N; a subnetwork τ′ of N.)
In [4], Frankle and Carbin articulate The Lottery Ticket Hypothesis: neural networks contain sparse subnetworks that can be effectively trained from scratch when reset to their initialization. We offer a complementary conjecture: within a sufficiently overparameterized neural network with random weights (e.g. at initialization), there exists a subnetwork that achieves competitive accuracy. Specifically, the test accuracy of the subnetwork is able to match the accuracy of a trained network with the same number of parameters.

This work is catalyzed by the recent advances of Zhou et al. [29]. By sampling subnetworks in the forward pass, they first demonstrate that subnetworks of randomly weighted neural networks can achieve impressive accuracy. However, we hypothesize that stochasticity may limit their performance: as the number of parameters in the network grows, the subnetworks sampled on each forward pass are likely to exhibit high variability.

To this end we propose the edge-popup algorithm for finding effective subnetworks within randomly weighted neural networks, which yields a significant boost in performance and scales to ImageNet. For each fixed random weight in the network, we consider a positive real-valued score. To choose a subnetwork we take the weights with the top-k% highest scores. With a gradient estimator we optimize the scores via SGD. We are therefore able to find a good neural network without ever changing the values of the weights. We empirically demonstrate the efficacy of our algorithm and formally show that under certain technical assumptions the loss decreases on the mini-batch with each modification of the subnetwork. We experiment on small and large scale datasets for image recognition, namely CIFAR-10 [12] and ImageNet [3]. On CIFAR-10 we empirically demonstrate that as networks grow wider and deeper, untrained subnetworks perform just as well as the dense network with learned weights. On ImageNet, we find a subnetwork of a randomly weighted Wide ResNet-50 which is smaller than, but matches the performance of, a trained ResNet-34. Moreover, a randomly weighted ResNet-101 [8] with fixed weights contains a subnetwork that is much smaller, but surpasses the performance of VGG-16 [23]. In short, we validate the unreasonable effectiveness of randomly weighted neural networks for image recognition.

2. Related Work

Lottery Tickets And Supermasks

In [4] , Frankle and Carbin offer an intriguing hypothesis: neural networks contain sparse subnetworks that can be effectively trained from scratch when reset to their initialization. These so-called winning tickets have won the "initialization lottery". Frankle and Carbin find winning tickets by iteratively shrinking the size of the network, masking out weights which have the lowest magnitude at the end of each training run.

Follow up work by Zhou et al. [29] demonstrates that winning tickets achieve better than random performance without training. Motivated by this result they propose an algorithm to identify a "supermask" -a subnetwork of a randomly initialized neural network that achieves high accuracy without training. On CIFAR-10, they are able to find subnetworks of randomly initialized neural networks that achieve 65.4% accuracy.

The algorithm presented by Zhou et al. is as follows: for each weight w in the network they learn an associated probability p. On the forward pass they include weight w with probability p and otherwise zero it out. Equivalently, they use an effective weight w̃ = wX, where X is a Bernoulli(p) random variable (X is 1 with probability p and 0 otherwise). The probabilities p are the output of a sigmoid, and are learned using stochastic gradient descent. The terminology "supermask" arises as finding a subnetwork is equivalent to learning a binary mask for the weights.
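
To make the stochastic forward pass concrete, here is a minimal PyTorch sketch of such a layer. It illustrates only the idea described above and is not Zhou et al.'s implementation; the layer type, the zero initialization of the mask logits, and the straight-through trick used to pass gradients to the probabilities are our own assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StochasticSupermaskLinear(nn.Linear):
        """Sketch of a Zhou et al.-style layer: fixed random weights, learned
        per-weight keep probabilities, and a Bernoulli-sampled mask each forward pass."""

        def __init__(self, in_features, out_features):
            super().__init__(in_features, out_features, bias=False)
            self.weight.requires_grad = False              # weights stay at their random init
            self.mask_logits = nn.Parameter(torch.zeros_like(self.weight))

        def forward(self, x):
            p = torch.sigmoid(self.mask_logits)            # keep probability for each weight
            mask = torch.bernoulli(p)                      # X ~ Bernoulli(p), resampled every pass
            # One common way to let gradients reach the logits (an assumption, not
            # necessarily the estimator used by Zhou et al.).
            mask = mask.detach() + p - p.detach()
            return F.linear(x, self.weight * mask)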

Our work builds upon Zhou et al., though we recognize that the stochasticity of their algorithm may limit performance. In section 3.1 we provide more intuition for this claim. We show a significant boost in performance with an algorithm that does not sample supermasks on the forward pass. For the first time we are able to match the performance of a dense network with a supermask.

Neural Architecture Search (NAS)

The advent of modern neural networks has shifted the focus from feature engineering to feature learning. However, researchers may now find themselves manually engineering the architecture of the network. Methods of Neural Architecture Search (NAS) [30, 2, 16, 24] instead provide a mechanism for learning the architecture of a neural network jointly with the weights. Models powered by NAS have recently obtained state-of-the-art classification performance on ImageNet [25].

As highlighted by Xie et al. [27], the connectivity patterns in methods of NAS remain largely constrained. Surprisingly, Xie et al. establish that randomly wired neural networks can achieve competitive performance. Accordingly, Wortsman et al. [26] propose a method of Discovering Neural Wirings (DNW), where the weights and structure are jointly optimized free from the typical constraints of NAS. We highlight DNW as we use a similar method of analysis and gradient estimator to optimize our supermasks. In DNW, however, the subnetwork is chosen by taking the weights with the highest magnitude, so the weights and the structure are inextricably linked; there is no way to learn a supermask with DNW without also learning the weights.

Weight Agnostic Neural Networks

In Weight Agnostic Neural Networks (WANNs) [5] , Gaier and Ha question if an architecture alone may encode the solution to a problem. They present a mechanism for building neural networks that achieve high performance when each weight in the network has the same shared value. Importantly, the performance of the network is agnostic to the value itself. They are able to obtain ∼ 92% accuracy on MNIST [15] .

We are quite inspired by WANNs, though we would like to highlight some important distinctions. Instead of each weight having the same value, we explore the setting where each weight has a random value. In Section A.2.2 of their appendix, Gaier and Ha mention that they were not successful in this setting. However, we find a good subnetwork for a given random initialization; the supermasks we find are not agnostic to the weights.

Figure 2. In the edge-popup Algorithm, we associate a score with each edge. On the forward pass we choose the top edges by score. On the backward pass we update the scores of all the edges with the straight-through estimator, allowing helpful edges that are “dead” to re-enter the subnetwork. We never update the value of any weight in the network, only the score associated with each weight.

Randomly weighted networks are often used as baselines in unsupervised learning [18]. Our work is different in motivation: we explicitly find untrained subnetworks which achieve high performance without changing any weight values, including the final layer.

3. Method

In this section we present our optimization method for finding effective subnetworks within randomly weighted neural networks. We begin by building intuition in an unusual setting -the infinite width limit. Next we motivate and present our algorithm for finding effective subnetworks.

3.1. Intuition

The Existence Of Good Subnetworks

Modern neural networks have a staggering number of possible subnetworks. Consequently, even at initialization, a neural network should contain a subnetwork which performs well.

To build intuition we will consider an extreme case: a neural network N in the infinite width limit (for a convolutional neural network, the width of the network is the number of channels). As in Figure 1, let τ be a network with the same structure as N that achieves good accuracy. If the weights of N are initialized using any standard scaling of a normal distribution, e.g. xavier [6] or kaiming [7], then we may show there exists a subnetwork of N that achieves the same performance as τ without training. Let q be the probability that a given subnetwork of N has weights that are close enough to τ to obtain the same accuracy. This probability q is extremely small, but it is still nonzero. Therefore, the probability that no subnetwork of N is close enough to τ is effectively (1 − q)^S, where S is the number of subnetworks. S grows very quickly with the width of the network, and so this probability becomes arbitrarily small.
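
To state the same argument in one line: if each of the S subnetworks independently matched τ with probability q > 0 (independence is an idealization made only for this intuition, since subnetworks share weights), then

$$\Pr\left[\text{no subnetwork of } N \text{ is close enough to } \tau\right] \;\approx\; (1-q)^S \;\xrightarrow[S \to \infty]{}\; 0,$$

and the decay is fast: for illustrative (arbitrary) values $q = 10^{-6}$ and $S = 10^{8}$, we get $(1-q)^S \approx e^{-100}$.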

How Should We Find A Good Subnetwork

Even if there are good subnetworks in randomly weighted neural networks, how should we find them?

Zhou et al. learn an associated probability p with each weight w in the network. On the forward pass they include weight w with probability p (where p is the output of a sigmoid) and otherwise zero it out. The infinite width limit provides intuition for a possible shortcoming of the algorithm presented by Zhou et al. [29] . Even if the parameters p are fixed, the algorithm will likely never observe the same subnetwork twice. As such, the gradient estimate becomes more unstable, and this in turn may make training difficult.

Our algorithm for finding a good subnetwork is illustrated by Figure 2 . With each weight w in the neural network we learn a positive, real valued popup score s. The subnetwork is then chosen by selecting the weights in each layer corresponding to the top-k% highest scores. For simplicity we use the same value of k for all layers.

How should we update the score s uv? Consider a single edge in a fully connected layer which connects neuron u to neuron v. Let w uv be the weight of this edge, and s uv the associated score. If this score is initially low then w uv is not selected in the forward pass. But we would still like a way to update its score to allow it to pop back up. Informally, with backprop [22] we compute how the loss "wants" node v's input to change (i.e. the negative gradient). We then examine the weighted output of node u. If this weighted output is aligned with the negative gradient, then node u can take node v's input where the loss "wants" it to go. Accordingly, we should increase the score. If this alignment happens consistently, then the score will continue to increase and the edge will re-enter the chosen subnetwork (i.e. popup).

More formally, if w uv Z u denotes the weighted output of neuron u, and I v denotes the input of neuron v, then we update s uv as

$$s_{uv} \leftarrow s_{uv} - \alpha \frac{\partial L}{\partial I_v} w_{uv} Z_u \tag{1}$$

This argument and the analysis that follows are motivated and guided by the work of [26]. In their work, however, they do not consider a score and instead directly update the weights. In the forward pass they use the top k% of edges by magnitude, and therefore there is no way of learning a subnetwork without learning the weights. Their goal is to train sparse neural networks, while we aim to showcase the efficacy of randomly weighted neural networks.

3.2. The Edge-Popup Algorithm And Analysis

We now formally detail the edge-popup algorithm. For clarity, we first describe our algorithm for a fully connected neural network. In Section B.2 we provide the straightforward extension to convolutions along with code in PyTorch [20] .

A fully connected neural network consists of layers 1, ..., L, where layer ℓ has n_ℓ nodes $V^{(\ell)} = \{v^{(\ell)}_1, \dots, v^{(\ell)}_{n_\ell}\}$.

We let I_v denote the input to node v and let Z_v denote the output, where Z_v = σ(I_v) for some non-linear activation function σ (e.g. ReLU [13]). The input to neuron v in layer ℓ is a weighted sum of all neurons in the preceding layer. Accordingly, we write I_v as

$$I_v = \sum_{u \in V^{(\ell-1)}} w_{uv} Z_u \tag{2}$$

where w_uv are the network parameters for layer ℓ. The output of the network is taken from the final layer while the input data is given to the very first layer. Before training, the weights w_uv for layer ℓ are initialized by independently sampling from a distribution D_ℓ. For example, if we are using kaiming normal initialization [7] with ReLU activations, then $D_\ell = \mathcal{N}\!\left(0, \sqrt{2/n_{\ell-1}}\right)$, where $\mathcal{N}$ denotes the normal distribution. Normally, the weights w_uv are optimized via stochastic gradient descent. In our edge-popup algorithm, we instead keep the weights at their random initialization, and optimize to find a subnetwork G = (V, E). We then compute the input of node v in layer ℓ as

$$I_v = \sum_{(u,v) \in E} w_{uv} Z_u \tag{3}$$

where G is a sub-graph of the original fully connected network¹. As mentioned above, for each weight w_uv in the original network we learn a popup score s_uv. We choose the subnetwork G by selecting the weights in each layer which have the top-k% highest scores. Equation 3 may therefore be written equivalently as

$$I_v = \sum_{u \in V^{(\ell-1)}} w_{uv} Z_u \, h(s_{uv}) \tag{4}$$

where h(s_uv) = 1 if s_uv is among the top k% highest scores in layer ℓ, and h(s_uv) = 0 otherwise. Since the gradient of h is 0 everywhere it is not possible to directly compute the gradient of the loss with respect to s_uv. We instead use the straight-through gradient estimator [1], in which h is treated as the identity in the backwards pass: the gradient goes "straight-through" h.
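
A minimal PyTorch sketch of such a straight-through top-k selector is given below (the appendix code refers to it as GetSubnet). This is our own illustrative implementation of the behavior described above, assuming k is the fraction of weights kept in the layer; the authors' released code may differ in its details.

    import torch

    class GetSubnet(torch.autograd.Function):
        """h(s): binary mask that keeps the top-k fraction of scores in a layer."""

        @staticmethod
        def forward(ctx, scores, k):
            out = scores.clone()
            _, idx = scores.flatten().sort()       # indices of scores in ascending order
            j = int((1 - k) * scores.numel())      # number of weights to drop
            flat = out.flatten()                   # view into `out`
            flat[idx[:j]] = 0                      # lowest (1 - k) fraction: excluded
            flat[idx[j:]] = 1                      # top-k fraction: included
            return out

        @staticmethod
        def backward(ctx, grad_output):
            # Straight-through: treat h as the identity, so the gradient reaches the
            # scores unchanged; k gets no gradient.
            return grad_output, None

In the forward pass only the selected edges contribute to I_v, while the backward pass still produces a gradient for every score, so "dead" edges can pop back up.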

Consequently, we approximate the gradient to s_uv as

$$\hat{g}_{s_{uv}} = \frac{\partial L}{\partial I_v} \frac{\partial I_v}{\partial s_{uv}} = \frac{\partial L}{\partial I_v} w_{uv} Z_u \tag{5}$$

where L is the loss we are trying to minimize. The scores s uv are then updated via stochastic gradient descent with learning rate α. If we ignore momentum and weight decay [14] then we update s uv as

$$\tilde{s}_{uv} = s_{uv} - \alpha \hat{g}_{s_{uv}} = s_{uv} - \alpha \frac{\partial L}{\partial I_v} w_{uv} Z_u \tag{6}$$

where $\tilde{s}_{uv}$ denotes the score after the gradient step².

As the scores change, certain edges in the subnetwork will be replaced with others. Motivated by the analysis of [26] we show that when swapping does occur, the loss decreases for the mini-batch.

Theorem 1: When edge (i, ρ) replaces (j, ρ) and the rest of the subnetwork remains fixed, then the loss decreases for the mini-batch (provided the learning rate α is sufficiently small, and the loss is smooth).

Proof. Let $\tilde{s}_{uv}$ denote the score of weight w_uv after the gradient update. If edge (i, ρ) replaces (j, ρ) then our algorithm dictates that $s_{i\rho} < s_{j\rho}$ but $\tilde{s}_{i\rho} > \tilde{s}_{j\rho}$. Accordingly,

$$\tilde{s}_{i\rho} - s_{i\rho} > \tilde{s}_{j\rho} - s_{j\rho} \tag{7}$$

which implies that

$$\frac{\partial L}{\partial I_\rho}\left(w_{i\rho} Z_i - w_{j\rho} Z_j\right) < 0 \tag{8}$$

by the update rule given in Equation 6. Let $\tilde{I}_\rho$ denote the input to node ρ after the swap is made and $I_\rho$ denote the original input. Note that $\tilde{I}_\rho - I_\rho = w_{i\rho} Z_i - w_{j\rho} Z_j$ by Equation 3. We now wish to show that $L(\tilde{I}_\rho) < L(I_\rho)$.

When the learning rate is sufficiently small (and the loss is smooth) we may assume that $\tilde{I}_\rho$ is close to $I_\rho$ and ignore second-order terms in a Taylor expansion:

$$L(\tilde{I}_\rho) \approx L(I_\rho) + \frac{\partial L}{\partial I_\rho}\left(\tilde{I}_\rho - I_\rho\right) = L(I_\rho) + \frac{\partial L}{\partial I_\rho}\left(w_{i\rho} Z_i - w_{j\rho} Z_j\right) \tag{9-11}$$

and from Equation 8 we have that $\frac{\partial L}{\partial I_\rho}\left(w_{i\rho} Z_i - w_{j\rho} Z_j\right) < 0$, and so $L(\tilde{I}_\rho) < L(I_\rho)$ as needed.

We examine a more general case of Theorem 1 in Section B.1 of the supplementary material.

4. Experiments

We demonstrate the unreasonable effectiveness of randomly weighted neural networks for image recognition on the standard benchmark datasets CIFAR-10 [12] and ImageNet [3]. This section is organized as follows: in Section 4.1 we discuss the experimental setup and hyperparameters. We perform a series of ablations at small scale: we examine the effect of k, the % of weights which remain in the subnetwork, and the effect of width. In Section 4.4 we compare against the algorithm of Zhou et al., followed by Section 4.5 in which we study the effect of the distribution used to sample the weights. We conclude with Section 4.6, where we optimize to find subnetworks of randomly weighted neural networks which achieve good performance on ImageNet [3].

4.1. Experimental Setup

We use two different distributions for the weights in our network:

  • Kaiming Normal [7], which we denote $N_k$. Following the notation in Section 3.2, the Kaiming Normal distribution is defined as $N_k = \mathcal{N}\!\left(0, \sqrt{2/n_{\ell-1}}\right)$, where $\mathcal{N}$ denotes the normal distribution.

  • Signed Kaiming Constant, which we denote $U_k$. Here we set each weight to be a constant and randomly choose its sign to be + or −. The constant we choose is the standard deviation of Kaiming Normal, and as a result the variance is the same. We use the notation $U_k$ as we are sampling uniformly from the set $\{-\sigma_k, \sigma_k\}$, where $\sigma_k$ is the standard deviation of Kaiming Normal (i.e. $\sqrt{2/n_{\ell-1}}$). A sketch of both initializations is given after this list.
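
As a concrete sketch of the two initializations above, one could write the following. These helpers are our own (hypothetical names, fan-in computed directly) and are meant only to illustrate the definitions; they are not taken from the authors' code.

    import math
    import torch

    def kaiming_normal_init_(weight: torch.Tensor) -> torch.Tensor:
        # N_k: zero-mean normal with std sigma_k = sqrt(2 / n_{l-1})  (fan-in).
        fan_in = weight[0].numel()              # in_features (x kH x kW for a conv weight)
        sigma_k = math.sqrt(2.0 / fan_in)
        with torch.no_grad():
            return weight.normal_(0.0, sigma_k)

    def signed_kaiming_constant_init_(weight: torch.Tensor) -> torch.Tensor:
        # U_k: every weight has magnitude sigma_k with a uniformly random sign,
        # so the variance matches Kaiming Normal.
        fan_in = weight[0].numel()
        sigma_k = math.sqrt(2.0 / fan_in)
        with torch.no_grad():
            sign = torch.randint(0, 2, weight.shape, device=weight.device) * 2 - 1
            return weight.copy_(sign * sigma_k)

The "Scaled" variants discussed in Section 4.5 would simply rescale sigma_k so that the forward-pass variance is preserved when only a fraction k of the weights is kept.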

In Section 4.5 we reflect on the importance of the random distribution and experiment with alternatives.

Table 1. For completeness we provide the architecture of the simple VGG-like [23] architectures used for CIFAR-10 [12], which are identical to those used by Frankle and Carbin [4] and Zhou et al. [29]. However, the slightly deeper Conv8 does not appear in the previous work. Each model first performs convolutions followed by the fully connected (FC) layers, and pool denotes max-pooling.

On CIFAR-10 [12] we experiment with simple VGG-like architectures of varying depth. These architectures are also used by Frankle and Carbin [4] and Zhou et al. [29] and are provided in Table 1. On ImageNet we experiment with ResNet-50 and ResNet-101 [8], as well as their wide variants [28]. In every experiment we train for 100 epochs and report the last epoch accuracy on the validation set. When we optimize with Adam [11] we do not decay the learning rate. When we optimize with SGD we use cosine learning rate decay [17]. On CIFAR-10 [12] we train our models with weight decay 1e-4, momentum 0.9, batch size 128, and learning rate 0.1. We also often run both an Adam and SGD baseline where the weights are learned. The Adam baseline uses the same learning rate and batch size as in [4, 29]³. For the SGD baseline we find that training does not converge with learning rate 0.1, and so we use 0.01. As standard we also use weight decay 1e-4, momentum 0.9, and batch size 128, following common practice for training ResNet [19]. For simplicity, our edge-popup algorithm does not modify batch norm parameters; they are frozen at their default initialization in PyTorch (i.e. bias 0, scale 1). This discussion has encompassed the extent of the hyperparameter tuning for our models. We do, however, perform hyperparameter tuning for the Zhou et al. [29] baseline and improve accuracy significantly. We include further discussion of this in Section 4.4.
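
Since only the scores are trained, the optimizer receives just the popup scores while the weights stay frozen. The snippet below is an illustrative configuration consistent with the CIFAR-10 hyperparameters listed above, not the authors' training script; the stand-in layer and the parameter names are assumptions.

    import torch
    import torch.nn as nn

    # Stand-in for a Conv4/Conv6/ResNet built from subnetwork layers: the weights are
    # frozen at their random initialization and only the popup scores are trainable.
    layer = nn.Linear(10, 10, bias=False)
    layer.weight.requires_grad = False
    popup_scores = nn.Parameter(torch.randn_like(layer.weight))

    # SGD over the scores only, with the settings listed above.
    optimizer = torch.optim.SGD([popup_scores], lr=0.1, momentum=0.9, weight_decay=1e-4)
    # Cosine learning rate decay over the 100 training epochs.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)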

In all experiments on CIFAR-10 [12] we use 5 different random seeds and plot the mean accuracy ± one standard deviation. Moreover, on all figures, Learned Dense Weights denotes standard training of the full model (all weights remaining).

4.2. Varying The % Of Weights

Our algorithm has one associated parameter: the % of weights which remain in the subnetwork, which we refer to as k. Figure 3 illustrates how the accuracy of the subnetwork we find varies with k, a trend which we will now dissect. We consider k ∈ [10, 30, 50, 70, 90] and plot the dense model when it is trained as a horizontal line (as it has 100% of the weights).

Figure 3. Going Deeper: Experimenting with shallow to deep neural networks on CIFAR-10 [12]. As the network becomes deeper, we are able to find subnetworks at initialization that perform as well as the dense original network when trained. The baselines are drawn as a horizontal line as we are not varying the % of weights. When we write Weights ∼ D we mean that the weights are randomly drawn from distribution D and are never tuned. Instead we find subnetworks with size (% of Weights)/100 * (Total # of Weights).

We obtain the worst accuracy when k approaches 0 or 100. When k approaches 0, we are not able to perform well as our subnetwork has very few weights. On the other hand, when k approaches 100, our network outputs are random.

The best accuracy occurs when k ∈ [30, 70], and we make a combinatorial argument for this trend. We are choosing kn weights out of n, and there are $\binom{n}{kn}$ ways of doing so. The number of possible subnetworks is therefore maximized when k ≈ 0.5, and at this value our search space is at its largest.
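
As a quick numerical check of this claim, recall that binomial coefficients peak in the middle:

$$\binom{n}{m} \text{ is maximized at } m = \lfloor n/2 \rfloor, \qquad \text{e.g. } \binom{10}{5} = 252 \;>\; \binom{10}{3} = \binom{10}{7} = 120,$$

so the number of candidate subnetworks $\binom{n}{kn}$ for a layer with n weights is largest near k = 0.5.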

4.3. Varying The Width

Our intuition from Section 3.1 suggests that as the network gets wider, a subnetwork of a randomly weighted model should approach the trained model in accuracy. How wide is wide enough? In Figure 4 we vary the width of Conv4 and Conv6. The width of a linear layer is the number of "neurons", and the width of a convolution layer is the number of channels. The width multiplier is the factor by which the width of all layers is scaled. A width multiplier of 1 corresponds to the models tested in Figure 3 .

Figure 4. Going Wider: Varying the width (i.e. number of channels) of Conv4 and Conv6 for CIFAR-10 [12]. When Conv6 is wide enough, a subnetwork of the randomly weighted model (with %Weights = 50) performs just as well as the full model when it is trained.

As the width multiplier increases, the gap shrinks between the accuracy of a subnetwork found with edge-popup and the dense model when it is trained. Notably, when Conv6 is wide enough, a subnetwork of the randomly weighted model (with %Weights = 50) performs just as well as the dense model when it is trained. Moreover, this boost in performance is not solely from the subnetwork having more parameters. Even when the # of parameters is fixed, increasing the width, and therefore the search space, leads to better performance. In Figure 5 we fix the number of parameters while modifying k and the width multiplier. Specifically, we test k ∈ [30, 50, 70] for subnetworks of constant size c1, c2 and c3. In Figure 5 we use |E| to denote the size of the subnetwork.

Figure 5. Varying the width of Conv4 on CIFAR-10 [12] while modifying k so that the # of Parameters is fixed along each curve. c1, c2, c3 are constants which coincide with # of Parameters for k = [30, 50, 70] for width multiplier 1.

4.4. Comparing With Zhou Et Al. [29]

In Figure 6 we compare the performance of edge-popup with Zhou et al. Their work considers distributions $N_x$ and $U_x$, which are identical to those presented in Section 4.1 but with xavier normal [6] instead of kaiming normal [7] (the factor of √2 is omitted from the standard deviation). By running their algorithm with $N_k$ and $U_k$ we witness a significant improvement. However, even the $N_x$ and $U_x$ results exceed those reported in their paper, as we perform some hyperparameter tuning. As in our experiments on CIFAR-10, we use SGD with weight decay 1e-4, momentum 0.9, batch size 128, and a cosine scheduler [17]. We double the learning rate until we see their performance become worse, and settle on 200 (an absurdly high learning rate is required, as mentioned in their work).

Figure 6. Comparing the performance of edge-popup with the algorithm presented by Zhou et al. [29] on CIFAR-10 [12].

4.5. Effect Of The Distribution

The distribution that the random weights are sampled from is very important. As illustrated by Figure 7, the performance of our algorithm vastly decreases when we switch to using xavier normal [6] or kaiming uniform [7]. Following the derivation in [7], the variance of the forward pass is not exactly 1 when we consider a subnetwork with only k% of the weights. To reconcile this we could scale the variance by 1/k, i.e. the standard deviation by √(1/k). This distribution is referred to as "Scaled Kaiming Normal" in Figure 7. We may also consider this scaling for the Signed Kaiming Constant distribution which is described in Section 4.1.
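
A rough version of that calculation, in the style of the derivation in [7] and under idealized independence assumptions (the retained edges are actually chosen by score, so this is only a heuristic): for a layer with n inputs, i.i.d. zero-mean weights of variance σ², ReLU activations, and a fraction k of incoming edges kept,

$$\operatorname{Var}(I_v) \;\approx\; k\, n\, \sigma^2\, \mathbb{E}\!\left[Z_u^2\right] \;=\; \frac{k\, n\, \sigma^2}{2}\operatorname{Var}(I_u),$$

which preserves the variance across layers exactly when $\sigma^2 = 2/(kn)$, i.e. when the Kaiming standard deviation $\sqrt{2/n}$ is multiplied by $\sqrt{1/k}$.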

Figure 7. Testing different weight distributions on CIFAR-10 [12].

4.6. ImageNet [3] Experiments

On ImageNet we observe similar trends to CIFAR-10. As ImageNet is much harder, computationally feasible models are not overparameterized to the same degree. As a consequence, the performance of a randomly weighted subnetwork does not match the full model with learned weights.

Figure 8. Testing our Algorithm on ImageNet [3]. We use a fixed k = 30%, and find subnetworks within a randomly weighted ResNet-50 [8], Wide ResNet-50 [28], and ResNet-101. Notably, a randomly weighted Wide ResNet-50 contains a subnetwork which is smaller than, but matches the performance of ResNet-34. Note that for the non-dense models, # of Parameters denotes the size of the subnetwork.

However, we still witness a very encouraging trend: the performance increases with the width and depth of the network.

As illustrated by Figure 8, a randomly weighted Wide ResNet-50 contains a subnetwork that is smaller than, but matches the accuracy of, ResNet-34 when trained on ImageNet [3]. As strongly suggested by our trends, better and larger "parent" networks would result in even stronger performance on ImageNet [3]. A table which reports the numbers in Figure 8 may be found in Section A of the supplementary material. Figure 9 illustrates the effect of k, which follows an almost identical trend: k ∈ [30, 70] performs best, though 30 now provides the best performance. Figure 9 also demonstrates that we significantly outperform Zhou et al. at scale (they do not test on ImageNet in their paper). Their algorithm does not allow an explicit choice of the % of weights remaining in the subnetwork, and we found the algorithm unstable outside of the range reported.

Figure 9. Examining the effect of % weights on ImageNet for edge-popup and the method of Zhou et al.

The choice of the random distribution matters more for ImageNet. The "Scaled" distribution we discuss in Section 4.5 did not show any discernible difference on CIFAR-10. However, Figure 10 illustrates that on ImageNet it is much better. Recall that the "Scaled" distribution scales the variance by a factor of 1/k, which has less of an effect as k approaches 100% (i.e. k = 1). This result highlights the possibility of finding distributions which work even better with this algorithm.

Figure 10. Examining the effect of using the “Scaled” initialization detailed in Section 4.5 on ImageNet.

5. Conclusion

Hidden within randomly weighted neural networks we find subnetworks with compelling accuracy. This work provides an avenue for many areas of exploration. For example, we anticipate the development of faster algorithms, or the alternating optimization of the structure and the weights.

Finally, we hope that our findings serve as a useful step in the pursuit of understanding the optimization of neural networks.

A. Table Of Imagenet Results

Table 2. ImageNet [3] classification results corresponding to Figure 8. Note that for the non-dense models, # of Parameters denotes the size of the subnetwork.

B. Additional Technical Details

In this section we first prove a more general case of Theorem 1 then provide an extension of edge-popup for convolutions along with code in PyTorch [20], found in Algorithm 1.

B.1. A More General Case Of Theorem 1

We now examine a more general case of Theorem 1, where the two swapped edges are not connected to the same node. Again we are motivated by the analysis of [26], though we tackle a more general case.

Theorem 1 (more general): When a nonzero number of edges are swapped in one layer and the rest of the network remains fixed, then the loss decreases for the mini-batch (provided the learning rate α is sufficiently small, and the loss is smooth).

Proof. As before, we let $\tilde{s}_{uv}$ denote the score of weight w_uv after the gradient update. Additionally, let $\tilde{I}_v$ denote the input to node v after the gradient update, whereas $I_v$ is the input to node v before the update. Finally, let $i_1, \dots, i_n$ denote the n nodes in layer ℓ − 1 and $j_1, \dots, j_m$ denote the m nodes in layer ℓ. Our goal is to show that

$$L\left(\tilde{I}_{j_1}, \dots, \tilde{I}_{j_m}\right) < L\left(I_{j_1}, \dots, I_{j_m}\right) \tag{12}$$

where the loss is written as a function of layer ℓ's input for brevity. Since α is small and the loss is smooth we may assume that each $\tilde{I}_{j_k}$ is close to $I_{j_k}$ and ignore second-order terms in a Taylor expansion:

$$L\left(\tilde{I}_{j_1}, \dots, \tilde{I}_{j_m}\right) \approx L\left(I_{j_1}, \dots, I_{j_m}\right) + \sum_{k=1}^{m} \frac{\partial L}{\partial I_{j_k}}\left(\tilde{I}_{j_k} - I_{j_k}\right) \tag{13-15}$$

And so, in order to show Equation 12 it suffices to show that

$$\sum_{k=1}^{m} \frac{\partial L}{\partial I_{j_k}}\left(\tilde{I}_{j_k} - I_{j_k}\right) < 0 \tag{16}$$

It is helpful to rewrite the sum to be over edges. Specifically, we will consider the sets E old and E new where E new contains all edges that entered the network after the gradient update and E old consists of edges which were previously in the subnetwork, but have now exited. As the total number of edges is conserved we know that |E new | = |E old |, and by assumption |E new | > 0.

Using the definition of $I_{j_k}$ and $\tilde{I}_{j_k}$ from Equation 3 we may rewrite Equation 16 as

$$\sum_{(u,v) \in E_{\mathrm{new}}} \frac{\partial L}{\partial I_v}\, w_{uv} Z_u \;-\; \sum_{(u,v) \in E_{\mathrm{old}}} \frac{\partial L}{\partial I_v}\, w_{uv} Z_u \;<\; 0 \tag{17}$$

which, by Equation 6 and factoring out 1/α becomes

$$\sum_{(u,v) \in E_{\mathrm{new}}} \left(s_{uv} - \tilde{s}_{uv}\right) \;-\; \sum_{(u,v) \in E_{\mathrm{old}}} \left(s_{uv} - \tilde{s}_{uv}\right) \;<\; 0 \tag{18}$$

We now show that

$$\left(s_{i_a j_b} - \tilde{s}_{i_a j_b}\right) - \left(s_{i_c j_d} - \tilde{s}_{i_c j_d}\right) < 0 \tag{19}$$

for any pair of edges (i a , j b ) ∈ E new and (i c , j d ) ∈ E old .

Since |E new | = |E old | > 0 we are then able to conclude that Equation 18 holds. As (i a , j b ) was not in the edge set before the gradient update, but (i c , j d ) was, we can conclude

$$s_{i_a j_b} < s_{i_c j_d} \tag{20}$$

Likewise, since $(i_a, j_b)$ is in the edge set after the gradient update, but $(i_c, j_d)$ isn't, we can conclude

$$\tilde{s}_{i_a j_b} > \tilde{s}_{i_c j_d} \tag{21}$$

By adding Equation 21 and Equation 20 we find that Equation 19 is satisfied as needed.

B.2. Extension To Convolutional Neural Networks

In order to show that our method extends to convolutional layers we recall that convolutions may be written in a form that resembles Equation 2. Let κ be the kernel size which we assume is odd for simplicity, then for w ∈ {1, ..., W } and h ∈ {1, ..., H} we have

$$I_v^{(w,h)} = \sum_{u \in V^{(\ell-1)}} \;\sum_{i=-\frac{\kappa-1}{2}}^{\frac{\kappa-1}{2}} \;\sum_{j=-\frac{\kappa-1}{2}}^{\frac{\kappa-1}{2}} w_{uv}^{(i,j)}\, Z_u^{(w+i,\; h+j)} \tag{22}$$

where instead of "neurons", we now have "channels". The input $I_v$ and output $Z_v$ are now two dimensional, and so $Z_v^{(w,h)}$ is a scalar. As before, $Z_v = \sigma(I_v)$, where σ is a nonlinear function. However, in the convolutional case σ is often batch norm [10] followed by ReLU (and then implicitly followed by zero padding).

Instead of simply having weights w_uv, we now have weights $w_{uv}^{(i,j)}$ for each spatial offset (i, j) of the kernel, along with associated scores $s_{uv}^{(i,j)}$. The update for the scores is quite similar, though we must now sum over all spatial (i.e. w and h) locations as given below:

$$\hat{g}_{s_{uv}^{(i,j)}} = \sum_{w=1}^{W} \sum_{h=1}^{H} \frac{\partial L}{\partial I_v^{(w,h)}}\, w_{uv}^{(i,j)}\, Z_u^{(w+i,\; h+j)} \tag{24}$$

In summary, we now have κ² edges between each u and v. The PyTorch [20] code is given by Algorithm 1, where h is GetSubnet. The gradient goes straight through h in the backward pass, and PyTorch handles the implementation of these equations.

    # self.k is the % of weights remaining, a real number in [0,1]
    # self.popup_scores is a Parameter which has the same shape as self.weight
    # Gradients to self.weight, self.bias have been turned off.
    def forward(self, x):
        # Get the subnetwork by sorting the scores.
        adj = GetSubnet.apply(self.popup_scores.abs(), self.k)
        # Use only the subnetwork in the forward pass.
        w = self.weight * adj
        x = F.conv2d(
            x, w, self.bias, self.stride, self.padding, self.dilation, self.groups
        )
        return x
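
For context, the forward method in Algorithm 1 would live inside a convolutional layer whose weights are frozen and whose only trainable parameter is the score tensor. The wrapper below is a sketch of such a layer; the class name, the constructor signature with a k keyword, and the normal initialization of the scores are our own choices, and GetSubnet is the straight-through selector sketched in Section 3.2.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SubnetConv(nn.Conv2d):
        """Conv2d whose weights stay at their random initialization; only scores train."""

        def __init__(self, *args, k=0.3, **kwargs):
            super().__init__(*args, **kwargs)
            self.k = k
            # One popup score per weight (initialization is a choice, not prescribed above).
            self.popup_scores = nn.Parameter(torch.randn_like(self.weight))
            # Freeze the random weights (and bias, if present).
            self.weight.requires_grad = False
            if self.bias is not None:
                self.bias.requires_grad = False

        def forward(self, x):
            # Same forward pass as Algorithm 1 above.
            adj = GetSubnet.apply(self.popup_scores.abs(), self.k)
            w = self.weight * adj
            return F.conv2d(
                x, w, self.bias, self.stride, self.padding, self.dilation, self.groups
            )

    # Example: keep 30% of the weights of a 3x3 convolution.
    conv = SubnetConv(64, 128, kernel_size=3, padding=1, bias=False, k=0.3)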

The original network has edges $E_{fc} = \bigcup_{\ell=1}^{L-1} \left(V^{(\ell)} \times V^{(\ell+1)}\right)$, where × denotes the cross-product.

To ensure that the scores are positive we take the absolute value.

Batch size 60, learning rate 2e-4, 3e-4 and 3e-4 for Conv2, Conv4, and Conv6 respectively. Conv8 is not tested in [4], though we find that learning rate 3e-4 still performs well.