Your objection to the second paper did not address the real issue, which is essentially theoretical: NNs (deep or otherwise) work only on data, and functions of combinations of data can, by their very nature, produce the same output for different combinations of inputs. This is a theoretical limitation that cannot be resolved by these models.

About “hack” – it’s just a question of terminology. In that sense all of mathematics is “just a hack”. But DNNs work for practical applications, and sometimes better than humans.

I think you are correct – given a ‘normal’ image, it should be reasonably accurate. Given random data, both human neural nets and artificial ones are prone to misclassification; the Rorschach test relies on this.

To both papers: what they are saying (or rather demonstrating) is, basically, that neural networks are just hacks and not a real solution to the computer-vision problem. It’s as if we know we should have used other code for the task, but we don’t know what it should be, so we just throw whatever “works” at the problem. No surprise we end up with unexpected “bugs” later.

They have different names to distinguish them from CUDA’s isnan.

Different implementations of isnan are listed here:

http://stackoverflow.com/questions/2249110/how-do-i-make-a-portable-isnan-isinf-function

The simplest seems to be:

#include <cmath>

int isnan_host(double x) {
    return std::isnan(x);
}

int isinf_host(double x) {
    return std::isinf(x);
}

Tell me if it helps.

Do you have an idea of what I could do at this point? I’ve tried relocating the #define statements, but it hasn’t fixed the compilation problems so far…

Which method is better probably depends on the specifics of the problem. The upper bound for lambda clearly depends on the operator L (5), and (5) is usually solved inexactly, with a large residual, so the bound depends not only on L but also on the method used to solve it. I don’t see any easy rule of choice here, only experimenting…

By the way, this sort of thing is exactly why I think every paper should mention that split Bregman is ADMM…
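For readers who haven’t seen the equivalence spelled out, a short sketch (notation mine, following the generic Goldstein–Osher problem form, not any specific paper above). For $\min_{u,d} \|d\|_1 + H(u)$ subject to $d = \Phi u$, split Bregman iterates

\begin{align*}
u^{k+1} &= \arg\min_u \, H(u) + \tfrac{\mu}{2}\|d^{k} - \Phi u - b^{k}\|_2^2,\\
d^{k+1} &= \arg\min_d \, \|d\|_1 + \tfrac{\mu}{2}\|d - \Phi u^{k+1} - b^{k}\|_2^2,\\
b^{k+1} &= b^{k} + \Phi u^{k+1} - d^{k+1}.
\end{align*}

These are exactly the scaled-form ADMM updates for $f(u) = H(u)$, $g(d) = \|d\|_1$ with constraint $\Phi u - d = 0$ and scaled dual variable $b$, since $\|d - \Phi u - b\|_2^2 = \|\Phi u - d + b\|_2^2$.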
