Search analysis results for 'neural networks'
Search result threads (sorted by score)
Graphical representation of Neural Networks (Notes) ⤵️
Visualization of what is inside AI models: the layers of interconnected neural networks. And yes, patterns do develop, and they can form a signature of how the model "thinks". The pattern can be seen as the thought process.
Yann LeCun, Chief AI Scientist at Meta, said that LLMs (like ChatGPT) merely create the illusion of intelligence through fluent language manipulation while lacking the depth of human understanding or reasoning. He argued that this is a recurring pattern in the history of AI since the 1950s: each generation of AI technology (including the perceptrons of the 1960s and the neural networks of the 1980s) has been hyped as a step toward AGI but has consistently failed to deliver on its promises.
Want to know why neural networks will never replace real programmers? The answer is in the picture below: optimization. AI has no imagination, and it doesn't seek elegant solutions. It just rushes ahead and cuts corners where it shouldn't. The problem is that today even real programmers don't know how to optimize code (or just don't want to): you don't need optimization if you can buy more powerful hardware.
The single most undervalued fact of mathematics: mathematical expressions are graphs, and graphs are matrices. Viewing neural networks as graphs is the idea that led to their success.
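The expression-to-graph-to-matrix correspondence in the post above can be sketched concretely. The snippet below (a minimal illustration; the node names and the toy expression `relu(W @ x + b)` are my choices, not from the post) encodes a tiny computation graph as an adjacency matrix, where matrix powers then count paths through the graph:

```python
import numpy as np

# The expression y = relu(W @ x + b) as a directed computation graph.
# Nodes are the inputs and the operations; edges point from operand to operation.
nodes = ["x", "W", "b", "matmul", "add", "relu"]
idx = {n: i for i, n in enumerate(nodes)}
edges = [("x", "matmul"), ("W", "matmul"),
         ("matmul", "add"), ("b", "add"),
         ("add", "relu")]

# The same graph as an adjacency matrix: A[i, j] = 1 iff there is an edge i -> j.
A = np.zeros((len(nodes), len(nodes)), dtype=int)
for src, dst in edges:
    A[idx[src], idx[dst]] = 1

# Matrix powers count paths: (A @ A)[i, j] is the number of 2-step paths i -> j.
two_step = A @ A
print(two_step[idx["x"], idx["add"]])   # x -> matmul -> add
```

The same idea scales up: a neural network's forward pass is a walk through such a graph, and graph algorithms (topological sort for execution order, reverse traversal for backpropagation) become linear-algebra operations on the matrix.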
People who say things like, "AI can never do…" don't understand they are arguing against mathematical fact. The universal approximation theorem for neural networks shows this. Let me break down what that actually means for those who don't understand. A theorem in math is a proved fact. Approximation doesn't mean "exact", but it means a network can get as close to exact as you want: for any continuous target function, a large enough network can match it to within any error tolerance.
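The "as close as you want" claim can be demonstrated with a network built by hand rather than trained. The sketch below (my own illustration, not from the post) constructs a one-hidden-layer ReLU network that reproduces piecewise-linear interpolation of sin(x) on [0, π]; with 100 segments, standard interpolation theory bounds the error by (π/100)²/8 ≈ 1.2e-4, and more knots would shrink it further:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Target function and interpolation knots on [0, pi].
knots = np.linspace(0.0, np.pi, 101)          # 100 linear segments
vals = np.sin(knots)
slopes = np.diff(vals) / np.diff(knots)       # slope of each segment

# One hidden ReLU layer: a hinge at each left knot. The output weight of
# hinge i is the slope change there, so the sum is the piecewise-linear fit.
w = np.concatenate([[slopes[0]], np.diff(slopes)])
hinges = knots[:-1]
b_out = vals[0]

def net(x):
    """b_out + sum_i w_i * relu(x - hinge_i): a 100-unit ReLU network."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return b_out + relu(x[:, None] - hinges[None, :]) @ w

# Maximum error of the network against sin on a dense grid.
xs = np.linspace(0.0, np.pi, 1000)
err = np.max(np.abs(net(xs) - np.sin(xs)))
print(err)
```

This is the constructive intuition behind the theorem: enough ReLU "bumps" can trace any continuous curve. The theorem guarantees such a network exists for any tolerance; it says nothing about whether gradient descent will find it.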
I’ve read several books on machine learning and how to code neural networks. I’ve learned the nodes-and-wires structure of ComfyUI. I’ve learned how to run AI models locally and how to train LoRAs on my own unique data sets. I’ve deliberately allowed AI to hallucinate on art and video I created. I’ve been on top of generative methods since the days of running GANs out of google code books. I have borderline professional knowledge of ML methods. Even with all that time invested, I’m against AI art.
NeurIPS 2025 just gave Best Paper to something that sounds obvious in hindsight. They scaled neural networks in reinforcement learning to 1024 layers. Got 2x to 50x better performance. Meanwhile, most RL papers use 2-5 layers. Have we been doing this wrong for years?
Dear algo, please connect me with computational neuroscience researchers at NeurIPS currently.