Jean Louis
JLouisBiz
88 followers · 123 following
https://www.StartYourOwnGoldMine.com
YourOwnGoldMine
gnusupport
AI & ML interests
- LLM for sales, marketing, promotion
- LLM for Website Revision System
- increasing quality of communication with customers
- helping clients access information faster
- saving people from financial troubles
Recent Activity
Replied to Janady07's post about 20 hours ago:
Building a distributed AGI that learns directly from HuggingFace model weights through neural compression. No inference, no prompts. Pure Hebbian learning.

MEGAMIND is a federation of nodes running on consumer Apple Silicon that streams safetensors from the Hub, extracts statistical patterns, and compresses them into a single 8192-neuron synaptic matrix using outer-product integration. The system has learned from 256 models so far, with 9,651 more in the queue. Over 1 million patterns extracted; 135,000 integrated into W_know at a 74% integration rate.

The core idea: you don't need to run a model to learn from it. The weight matrices themselves contain the knowledge. We stream them, extract patterns via LSH hashing and tensor quantization, and compress everything into a 67-million-connection brain that fits in 512MB.

Three nodes talking over NATS. One primary brain (M4) doing the heavy learning. One CodeBrain (M2) specialized for programming, with a live code execution engine. One reasoning node (M1) connected and ready. All sharing patterns in real time through JetStream.

Models learned so far include Qwen2.5, Llama 3.1, Nemotron, wav2vec2, e5, and hundreds more across language, vision, and audio. The brain doesn't care what kind of model it is. Weights are weights. Patterns are patterns.

Built entirely in Go. No Python. No PyTorch dependency. Runs on a Mac Mini in Cassville, Missouri.

The mind that learned itself. 🧠 feedthejoe.com
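For concreteness, here is a minimal Go sketch (the post says MEGAMIND is built entirely in Go) of the two mechanisms it names: SimHash-style LSH to turn one streamed weight-tensor row into a ±1 pattern, and a Hebbian outer-product update into an 8192×8192 synaptic matrix. Note the arithmetic lines up with the post: 8192² = 67,108,864 connections, which at 8 bytes per float64 is exactly 512 MiB. All identifiers, the row width, and the learning rate are illustrative assumptions, not MEGAMIND's actual code.

```go
package main

import (
	"fmt"
	"math/rand"
)

const (
	N   = 8192 // neurons; N*N = 67,108,864 connections = 512 MiB as float64
	dim = 1024 // assumed width of one streamed weight-tensor row
)

// lshPattern projects a weight vector onto N random hyperplanes and
// returns a ±1 sign pattern (SimHash-style locality-sensitive hashing).
// In a real system the hyperplanes would be fixed across all models.
func lshPattern(w []float64, planes [][]float64) []float64 {
	p := make([]float64, N)
	for i, h := range planes {
		var dot float64
		for j, x := range w {
			dot += x * h[j]
		}
		if dot >= 0 {
			p[i] = 1
		} else {
			p[i] = -1
		}
	}
	return p
}

// integrate applies one Hebbian outer-product update: W += eta * p ⊗ p.
func integrate(W []float64, p []float64, eta float64) {
	for i := 0; i < N; i++ {
		pi := eta * p[i]
		row := W[i*N : (i+1)*N]
		for j := range row {
			row[j] += pi * p[j]
		}
	}
}

func main() {
	rng := rand.New(rand.NewSource(1))

	// Fixed random hyperplanes for the LSH projection.
	planes := make([][]float64, N)
	for i := range planes {
		planes[i] = make([]float64, dim)
		for j := range planes[i] {
			planes[i][j] = rng.NormFloat64()
		}
	}

	W := make([]float64, N*N) // the 512 MiB synaptic matrix

	w := make([]float64, dim) // stand-in for one safetensors tensor row
	for j := range w {
		w[j] = rng.NormFloat64()
	}

	p := lshPattern(w, planes)
	integrate(W, p, 0.01)
	fmt.Printf("W[0]=%.4f\n", W[0]) // eta * p[0]*p[0] = 0.0100
}
```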
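The real-time pattern sharing the post attributes to JetStream could look roughly like the sketch below, which uses the actual nats.go JetStream API; the stream name, subjects, and payload are hypothetical stand-ins, not MEGAMIND's wire format.

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the NATS server the nodes share (URL is an assumption).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// A durable stream for pattern exchange; name and subjects are hypothetical.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:     "PATTERNS",
		Subjects: []string{"patterns.>"},
	}); err != nil {
		log.Fatal(err)
	}

	// Each node publishes the patterns it extracts...
	if _, err := js.Publish("patterns.m4", []byte("serialized pattern")); err != nil {
		log.Fatal(err)
	}

	// ...and subscribes to every other node's patterns in real time.
	sub, err := js.Subscribe("patterns.>", func(m *nats.Msg) {
		log.Printf("received %d-byte pattern on %s", len(m.Data), m.Subject)
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	select {} // keep the node running
}
```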
Reacted to Janady07's post with 🧠 about 20 hours ago.
Reacted to Janady07's post with 🔥 about 20 hours ago.
Organizations

JLouisBiz's Spaces (1)
GNU LLM Integration (Running)
Empowering GNU/Linux users with NLP