## “get nan or inf error” in cuda-convnet – possible fix

The “get nan or inf” error sometimes happens on lower-end GPUs in cuda-convnet. I have traced this error to NaN values in the weights of convolutional layers. It is still not clear to me why these NaN values appear in the weights. Are they backpropagated from fully-connected layers, or do they pop up in the convolution kernel itself? The latter looks more likely to me. Anyway, I made a temporary fix – just scan the weight gradients with a simple CUDA kernel and replace NaNs with zeroes. I haven’t observed the error since.
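The idea of the fix can be sketched in NumPy terms (the actual fix is a CUDA kernel that does this element-wise per thread; the function name here is illustrative, not the one in the repo):

```python
import numpy as np

def scrub_nan_inf(grad):
    """Replace NaN/Inf entries in a gradient array with zeros.

    NumPy sketch of what the CUDA kernel does: each thread checks
    one entry and zeroes it if it is not finite, so a single bad
    value cannot poison the whole weight update.
    """
    out = grad.copy()
    out[~np.isfinite(out)] = 0.0  # catches both NaN and +/-Inf
    return out

g = np.array([0.5, np.nan, -1.2, np.inf])
scrub_nan_inf(g)  # NaN and Inf entries become 0.0, the rest pass through
```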

I have pushed the fix into the Windows version of cuda-convnet at

https://github.com/s271/win_convnet

The fix is activated with the option --fix-nan=1

There shouldn’t be any problem porting these changes to the Linux version – they amount to several small changes in *.cu and *.py files only.

PS

If anyone is wondering what cuda-convnet is, here is a nice explanation:

http://fastml.com/object-recognition-in-images-with-cuda-convnet/

And here is the main paper on cuda-convnet:

http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf

## December finds in #arxiv

Repost from my Google+ stream.

**Computer Vision**

*Non-Local means is a local image denoising algorithm*

The paper shows that non-local means weights do not identify patches globally across the image, but are susceptible to the aperture problem:

http://en.wikipedia.org/wiki/Optical_flow#Estimation

That’s why short-radius NLM can be better than large-radius NLM. The small radius cutoff plays the role of a regularizer, similar to Total Variation in Horn–Schunck optical flow.

http://en.wikipedia.org/wiki/Horn%E2%80%93Schunck_method (my comment – TV-L1 is generally better than TV-L2 in Horn–Schunck)

http://arxiv.org/abs/1311.3768

**Deep Learning**

*Do Deep Nets Really Need to be Deep?*

The authors state that shallow neural nets can in fact achieve performance similar to deep convolutional nets. The problem, though, is that they have to be initialized or preconditioned – they cannot be trained directly with existing algorithms.

And for that initialization they need deep nets. The authors hypothesize that there should be algorithms that allow training those shallow nets directly to reach the same performance as deep nets.

http://arxiv.org/abs/1312.6184

*Intriguing properties of neural networks*

A linear combination of deep-layer nodes produces the same results as the original nodes. That suggests that it is the space itself, rather than its representation in individual nodes, that carries the information at deep layers.

The input-output mapping is also discontinuous – small perturbations can cause misclassification. Those perturbations do not depend on the training, only on the input to the classifier. (My comment – sparse coding is generally not smooth in its input, another argument that sparse coding is part of the internal mechanics of deep learning)

http://arxiv.org/abs/1312.6199

*From Maxout to Channel-Out: Encoding Information on Sparse Pathways*

This paper starts with the observation that maxout is a form of sparse coding: only one of the input pathways is chosen for further processing. From this the authors develop the principle further:

remove the “middle” layer which “chooses” the maximum input, and transfer the maximal input directly into the next layer – that is, make the choice function index-aware. Some other choice functions besides the max are considered, but max still seems the best.
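A toy NumPy sketch of the difference as I read it (assuming groups of inputs along the last axis; this is my illustration, not the paper’s code):

```python
import numpy as np

def maxout(groups):
    # maxout: each group of inputs collapses to its max value;
    # the index of the winning pathway is discarded
    return groups.max(axis=-1)

def channel_out(groups):
    # channel-out: keep the max value in place and zero the rest,
    # so the next layer also sees WHICH pathway was chosen
    out = np.zeros_like(groups)
    idx = groups.argmax(axis=-1)
    rows = np.arange(groups.shape[0])
    out[rows, idx] = groups[rows, idx]
    return out

x = np.array([[0.2, 1.5, -0.3],
              [2.0, 0.1,  0.4]])
maxout(x)       # shape (2,): winning values only, indices lost
channel_out(x)  # shape (2, 3): winners kept at their positions, rest zeroed
```

The output is piecewise constant in which pathway is active, which is where the discontinuity comes from.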

The piecewise-constant choice function makes an interesting connection to the previous paper (discontinuity of the input-output mapping).

http://arxiv.org/abs/1312.1909

*Unsupervised Feature Learning by Deep Sparse Coding*

This one, for a change, is not about convolutional networks.

Instead, SIFT (or similar) descriptors are used to produce a bag of words, sparse coding is applied with maxout, and manifold learning is applied on top. (http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction)

http://arxiv.org/abs/1312.5783

*Generative NeuroEvolution for Deep Learning*

I’m generally wary of evolutionary methods, but this looks kind of interesting – it’s based on a *compositional pattern producing network* (CPPN): encoding a geometric pattern as a composition of simple functions.

The CPPN is used to encode the connectivity pattern of an ANN (mostly a convolutional network). Thus the complete process is a combination of ANN training and evolutionary CPPN training.

http://arxiv.org/abs/1312.5355

*Some Improvements on Deep Convolutional Neural Network Based Image Classification*, *Unsupervised feature learning by augmenting single images*

Both papers seem to be about the same subject – squeezing more out of labeled images by applying a lot of transformations to them. (Some of those transformations are implemented in cuda-convnet, BTW)

http://arxiv.org/abs/1312.5402, http://arxiv.org/abs/1312.5242

*Exact solutions to the nonlinear dynamics of learning in deep linear neural networks*

An analytical exploration of a toy 3-layer model *without* actual non-linear neurons. The model is completely linear in the input (polynomial in the weights). Nevertheless it shows some interesting properties, like steps in the learning curve.

http://arxiv.org/abs/1312.6120

**Optimization**

*Distributed Interior-point Method for Loosely Coupled Problems*

Mixing together all my favorite methods – interior point, Newton, ADMM (Split Bregman) – into one algorithm, with a distributed implementation.

Mixing Newton with ADMM, and ADMM with interior point, looks risky to me, though with a lot of sub-iterations it may work (that’s why it’s distributed – it requires a lot of computing power).

Also, I’m not sure about the convergence of the combined algorithm – each step’s convergence is proven, but I’m not sure the same applies to the combination.

Newton and ADMM have somewhat contradictory optimality conditions – smoothness vs. piecewise linearity. I would like to see more research on this…

http://arxiv.org/abs/1312.5440

*Total variation regularization for manifold-valued data*

Proximal mappings and soft thresholding for manifolds – an analog of ADMM for manifold-valued data.
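For reference, in the Euclidean case the soft-thresholding operator the paper generalizes is the proximal mapping of the L1 penalty – a minimal NumPy sketch:

```python
import numpy as np

def soft_threshold(x, lam):
    # prox of lam*||x||_1: shrink each entry toward zero by lam,
    # clamping anything with magnitude below lam to exactly zero
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5)
# entries with |x| <= 0.5 vanish; the rest shrink by 0.5
```

The manifold version replaces this straight-line shrinkage toward zero with movement along geodesics, which is the paper’s contribution.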

http://arxiv.org/abs/1312.7710

**just interesting stuff**

*Coping with Physical Attacks on Random Network Structures*

Includes finding vulnerable spots and results of random attacks.

(My comment – shouldn’t this be connected to percolation theory?)

http://arxiv.org/abs/1312.6189