George Flint

georgeflint@berkeley.edu | georgeflint.com

Education

University of California, Berkeley

Bachelor of Arts in Cognitive Science, GPA: 3.9 / 4.0

Berkeley, CA, Class of 2026

Relevant Coursework: Advanced Syntax*, Compositionality*, Computational Models of Cognition, Linear Algebra, Signals, Systems and Inference†

* = graduate coursework, † = MIT OpenCourseWare

Publications

Aalok Sathe, George Flint, Evelina Fedorenko*, Noga Zaslavsky*. Language use is only sparsely compositional: The case of English adjective-noun phrases in humans and large language models. Cognitive Science (Proceedings). [Link]

George Flint, Anna Ivanova. Testing a Distributional Semantics Account of Grammatical Gender Effects on Semantic Gender Perception. Cognitive Science (Proceedings). [Link]

Dominic Domingo, Aryan Bandi, Arya Kunisetty, Ahan Banerjee, George Flint*, Kevin Zhu*. Testing Evolutionary and Reinforcement Learning Approaches to Traffic Flow Optimization in SUMO. AAAI Workshop on AI for Urban Planning. [Link]

* = senior author

Research Experience

Project Manager, Lead Machine Learning Researcher, Launchpad (UC Berkeley)

Lead theorist and developer for a project aiming to implement a quantum attention mechanism for a transformer model.

Led a team of 8 engineers building a classical and a quantum transformer from scratch; taught both fundamental transformer architecture and quantum components, including attention, positional encoding, time-complexity analysis, state encoding, entanglement strategies, and error correction (a generic attention sketch follows this entry).

Berkeley, CA, August 2024 -- December 2024
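
For illustration, a minimal sketch of classical scaled dot-product attention of the kind taught and built here (a generic NumPy formulation with assumed shapes and names, not the project's actual code):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d) arrays of queries, keys, and values.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)   # pairwise token affinities
        # Row-wise softmax turns affinities into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V              # weighted mixture of value vectors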

Lead Machine Learning Researcher, Language, Intelligence, and Thought Lab, Georgia Institute of Technology (Led by Anna Ivanova)

Sole theorist and developer for a project aiming to implement passive protolinguistic abilities in a purely Hebbian neural network.

Developed a novel associational training paradigm for classification without backpropagation, achieving high accuracy on MNIST and CIFAR-10.

Atlanta, GA, May 2024 -- Present

Lead Computational Linguistics Researcher, EvLab, Massachusetts Institute of Technology (Led by Evelina Fedorenko)

First-authored a study on linguistic relativity in humans and in distributional semantic space; conducted MRI scans and analyses.

Cambridge, MA, May 2023 -- August 2024

Teaching Experience

Head of Education, Launchpad (UC Berkeley)

Created and taught internal and external machine learning lectures on topics including GANs, CNNs/ResNets, autoencoders, RNNs, transformers, CLIP, MLPs, NeRFs, (deep) RL, evolutionary algorithms, diffusion, MCTS, alignment, and interpretability.

Created and administered a technical interview for applicants on Haar cascades for object detection (see the sketch below).

Berkeley, CA, May 2024 -- December 2024
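
For illustration, a minimal Haar-cascade detection sketch in the spirit of the interview topic, using OpenCV's bundled frontal-face cascade (the image path is illustrative, not the interview's actual prompt):

    import cv2

    # Load OpenCV's pretrained Haar cascade for frontal faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")  # illustrative input path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Each cascade stage cheaply rejects non-object windows using
    # Haar-like rectangle features over an image pyramid.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)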

Mentor, Senior Machine Learning Researcher, Algoverse

Served as senior author and mentor for 15 student teams on machine learning research projects aimed at conference submissions.

Featured projects in progress, under review, or accepted for publication include: RL and evolutionary algorithms for traffic flow optimization, real or simulated conversation between one or two LLMs for reasoning, k-fold cross-validation for symbolic regression, and comparing human and machine melanoma classification patterns.

Remote, June 2024 -- October 2024

Course Instructor, UC Berkeley

Created, taught, and managed a Linguistics 198 course on linguistic relativity for more than 50 students.

Berkeley, CA, January 2023 -- May 2023

Projects

Qompose | Quantum Machine Learning, Transformer Models

[Educational Project Spec], Ongoing

Testing theoretical advantages of the attention operation on an entangled sequence, including capturing all highest-order token interactions in a single layer in O(n³) time, compared to O(dⁿ) in classical transformers with equivalent information capacity.

Testing performance against a classical model on sequential function extrapolation and interpolation.

Hebbian Language Program | Hebbian Neural Networks, Theoretical Linguistics

[Informal Project Presentation], Ongoing

Theorizing, developing, and evaluating a purely Hebbian implementation of protolinguistic abilities.

Achieved >99% accuracy on MNIST and CIFAR-10 classification without backpropagation, using only Hebbian learning (a representative update rule is sketched below).

Achieved primitive compositionality by reconstructing unseen color-digit pairs from a generated Colored MNIST dataset.

Testing a theoretical neuronal implementation of syntax; adding agentive outputs for an RL and/or evolutionary learning environment.
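
For illustration, one textbook backpropagation-free update of the general kind this project explores (Oja's Hebbian rule; the function name and learning rate are illustrative, not the project's actual paradigm):

    import numpy as np

    def oja_update(W, x, lr=1e-3):
        # W: (n_out, n_in) weight matrix; x: (n_in,) input sample.
        y = W @ x  # post-synaptic activations
        # The Hebbian term np.outer(y, x) strengthens co-active pairs;
        # the y**2 * W decay term (Oja's rule) keeps weights bounded.
        W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
        return W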

Technical Skills

Languages: Python, Taichi Lang, R, JavaScript, HTML/CSS, PHP, SQL, Bash

Frameworks & Libraries: PyTorch, NeuroTorch, TensorFlow, Qiskit, PennyLane, TensorFlow Quantum, Optuna, pandas, NumPy, Matplotlib, seaborn, scikit-learn, FastAPI, fastText, Gensim, jQuery