principal software engineer crafting neural architectures in pure go. no
frameworks. no shortcuts. just mathematics, algorithms, and four decades of obsession with code.
40+ years coding
11+ years in go
∞ ai obsession
01 — focus areas
what i build
[ neural architectures ]
deep learning from scratch
implementing mlps, kans, esns, and transformers in pure go. backpropagation,
gradient descent, attention mechanisms... all hand-coded.
mlp • kan • esn • transformers
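the forward pass at the heart of all of these nets is the same few lines. a minimal dense-layer sketch, with hypothetical names and sigmoid chosen arbitrarily as the activation:

```go
package main

import (
	"fmt"
	"math"
)

// sigmoid squashes a pre-activation into (0, 1).
func sigmoid(x float64) float64 { return 1.0 / (1.0 + math.Exp(-x)) }

// forward computes one dense layer: out[j] = sigmoid(w[j]·in + b[j]).
// illustrative sketch; real layers batch this over matrices.
func forward(w [][]float64, b, in []float64) []float64 {
	out := make([]float64, len(w))
	for j, row := range w {
		sum := b[j]
		for i, wi := range row {
			sum += wi * in[i]
		}
		out[j] = sigmoid(sum)
	}
	return out
}

func main() {
	w := [][]float64{{0.5, -0.25}, {1.0, 1.0}}
	b := []float64{0.1, -0.1}
	fmt.Println(forward(w, b, []float64{1, 2}))
}
```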
[ inference systems ]
llm inference tooling
building llama.cpp wrappers, optimized inference pipelines, and custom
serving solutions. production-grade ai with go's performance.
llama.cpp • gguf • quantization
[ vector systems ]
embeddings & retrieval
custom vector databases, semantic search engines, and rag pipelines. turning
unstructured data into intelligence.
embeddings • hnsw • rag
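retrieval reduces to similarity search over embeddings. a brute-force sketch with illustrative names; hnsw replaces the linear scan with a graph walk, but the scoring is the same:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two embeddings.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// nearest scans the whole store for the most similar vector.
func nearest(query []float64, db [][]float64) int {
	best, bestScore := -1, math.Inf(-1)
	for i, v := range db {
		if s := cosine(query, v); s > bestScore {
			best, bestScore = i, s
		}
	}
	return best
}

func main() {
	db := [][]float64{{0, 1}, {1, 0.1}, {-1, 0}}
	fmt.Println(nearest([]float64{1, 0}, db))
}
```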
[ activation research ]
experimental architectures
exploring novel activation functions, learnable splines in kans, reservoir
computing dynamics. pushing the boundaries.
b-splines • gelu • swish
[ high-performance go ]
systems engineering
concurrent processing, memory optimization, simd operations. making go fast
enough for neural network training.
goroutines • simd • optimization
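one way goroutines show up in training code: fan a matrix-vector product out across rows. a sketch; production code would chunk rows rather than spawn one goroutine per row:

```go
package main

import (
	"fmt"
	"sync"
)

// parMatVec computes y = a·x with one goroutine per row.
// each goroutine writes a distinct y[i], so no locking is needed.
func parMatVec(a [][]float64, x []float64) []float64 {
	y := make([]float64, len(a))
	var wg sync.WaitGroup
	for i := range a {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			var sum float64
			for j, v := range a[i] {
				sum += v * x[j]
			}
			y[i] = sum
		}(i)
	}
	wg.Wait()
	return y
}

func main() {
	fmt.Println(parMatVec([][]float64{{1, 2}, {3, 4}}, []float64{1, 1}))
}
```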
[ model architecture ]
experimental models
designing and testing novel neural network architectures. combining
classical approaches with modern insights.
research • architecture • experimentation
02 — implementations
neural projects
nn—001
multi-layer perceptron
go • pure math • backpropagation
feedforward neural network with configurable layers, multiple activation
functions, and optimized matrix operations. training via gradient descent.
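the gradient descent loop in miniature: one learnable weight, a squared-error loss, and the hand-derived gradient. illustrative only, not the project's actual api:

```go
package main

import "fmt"

// fit runs gradient descent on a single weight, minimizing (w*x - target)^2.
// the gradient is d/dw (w*x - target)^2 = 2*(w*x - target)*x.
func fit(x, target, lr float64, steps int) float64 {
	w := 0.0
	for s := 0; s < steps; s++ {
		grad := 2 * (w*x - target) * x
		w -= lr * grad
	}
	return w
}

func main() {
	w := fit(2, 6, 0.1, 100)
	fmt.Printf("learned w = %.6f (exact solution: 3)\n", w)
}
```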
nn—002
kolmogorov-arnold network
go • b-splines • learnable activations
novel architecture with learnable activation functions on edges.
implementing the kolmogorov-arnold representation theorem in code.
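the spline machinery under a kan edge is the cox-de boor recursion; a sketch (kan edges then learn weighted sums of these basis functions):

```go
package main

import "fmt"

// bspline evaluates the i-th b-spline basis of degree p at t over knots,
// via the cox-de boor recursion. zero-width spans contribute nothing.
func bspline(i, p int, t float64, knots []float64) float64 {
	if p == 0 {
		if knots[i] <= t && t < knots[i+1] {
			return 1
		}
		return 0
	}
	var left, right float64
	if d := knots[i+p] - knots[i]; d > 0 {
		left = (t - knots[i]) / d * bspline(i, p-1, t, knots)
	}
	if d := knots[i+p+1] - knots[i+1]; d > 0 {
		right = (knots[i+p+1] - t) / d * bspline(i+1, p-1, t, knots)
	}
	return left + right
}

func main() {
	knots := []float64{0, 1, 2, 3, 4, 5}
	for i := 0; i <= 2; i++ {
		fmt.Printf("N_%d,2(2.5) = %.3f\n", i, bspline(i, 2, 2.5, knots))
	}
}
```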
nn—003
echo state network
go • reservoir computing • temporal patterns
reservoir computing implementation with sparse random connectivity.
efficient training through linear regression on readout weights.
nn—004
llama.go
go • llama.cpp • cgo • inference
high-performance llm inference wrapper with gguf support, context
management, and optimized memory handling for production deployments.
nn—005
vectordb
go • mmap • unsafe • zero-allocation
high-performance vector database utilizing mmap and unsafe for
zero-allocation operations. hnsw indexing with memory-mapped persistence for production-scale
similarity search.
transformer.go

// attention is all you need — in go
func (t *Transformer) Attention(q, k, v [][]float64) [][]float64 {
	scores := t.MatMul(q, t.Transpose(k))
	scaled := t.Scale(scores, 1.0/math.Sqrt(float64(t.dim)))
	weights := t.Softmax(scaled)

	return t.MatMul(weights, v)
}
03 — evolution
the path to ai
1981
the genesis
father builds a zenith heathkit h89. the first computer enters our home.
a spark ignites in a child's mind.
1984
first lines of code
age seven. msbasic | qbasic. teaching myself in the green glow of a crt
monitor. the obsession begins.
1990s
deep systems
c. c++. diving into memory management, pointers, the metal beneath the
abstractions.
2013
go enters
discovered golang. simplicity meeting power. concurrency as a first-class
citizen. home.
2016 — 2024
enterprise scale
building high-performance apis at top fortune 50 companies. millions of
requests. zero excuses.
2024
the ai pivot
building mlp, kan, esn neural networks in pure go. no frameworks. just
mathematics and determination.
now
convergence
transformers. llm inference. vector databases. dedicated to pushing the
boundaries of ai in go.