Curriculum Vitae

George Flint

georgeflint@berkeley.edu | georgeflint.com

Education

University of California, Berkeley

Bachelor of Arts in Cognitive Science, GPA: 3.9 / 4.0

Berkeley, CA, Class of 2026

Relevant Coursework: Advanced Syntax*, Advanced Compositionality*, Computational Models of Cognition, Linear Algebra, Signals, Systems and Inference

* = graduate coursework, † = MIT OpenCourseWare

Publications

Aalok Sathe, George Flint, Evelina Fedorenko*, Noga Zaslavsky*. Language use is only sparsely compositional: The case of English adjective-noun phrases in humans and large language models. Cognitive Science (Proceedings). [Link]

George Flint, Anna Ivanova. Testing a Distributional Semantics Account of Grammatical Gender Effects on Semantic Gender Perception. Cognitive Science (Proceedings). [Link]

Dominic Domingo, Aryan Bandi, Arya Kunisetty, Ahan Banerjee, George Flint*, Kevin Zhu*. Testing Evolutionary and Reinforcement Learning Approaches to Traffic Flow Optimization in SUMO. AAAI Workshop on AI for Urban Planning. [Link]

* = senior author

Research Experience

Project Manager, Lead Machine Learning Researcher, Launchpad (UC Berkeley)

Lead theorist and developer for a quantum machine learning project (see "Qompose" below).

Leading a team of 8 engineers building a classical and a quantum transformer from scratch; teaching both fundamental transformer architecture and quantum components, including attention, positional encoding, time-complexity analysis, state encoding, entanglement strategies, and error correction.

Berkeley, CA, August 2024 -- Present

Lead Machine Learning Researcher, Language, Intelligence, and Thought Lab, Georgia Institute of Technology (Led by Anna Ivanova)

Sole theorist and developer implementing linguistic abilities in a Hebbian neural network (see "Hebbian Language Program" below).

On MNIST, achieved accurate classification (75.87% vs. 85.59% supervised) and image reconstruction without backpropagation.

On a generated colored-MNIST dataset, achieved compositional generalization to unseen color-digit pairs for classification (70.1% ID, 26.5% OOD vs. 84.1%, 28.4% supervised) and image reconstruction.

Atlanta, GA, May 2024 -- Present

Lead Computational Linguistics Researcher, EvLab, Massachusetts Institute of Technology (Led by Evelina Fedorenko)

Lead researcher and developer on a study investigating linguistic relativity in humans and in distributional semantic space.

Designed human behavioral experiments (n=500), developed code for distributional experiments, conducted MRI scans and analyses.

Cambridge, MA, May 2023 -- August 2024

Teaching Experience

Head of Education, Launchpad (UC Berkeley)

Designed and taught internal and external machine learning lectures on topics including NLP, GANs, CNNs/ResNets, autoencoders, RNNs, transformers, LLMs, CLIP, MLPs, NeRFs, (deep) RL, evolutionary algorithms, diffusion, MCTS, alignment, and interpretability.

Designed and administered a technical interview for applicants on Haar cascades for object detection.

Berkeley, CA, May 2024 -- December 2024

Mentor, Senior Machine Learning Researcher, Algoverse

Mentored student teams on over two dozen machine learning research projects for conference submissions.

Featured projects include: traffic flow optimization with RL/evolutionary algorithms, LLM-based reasoning systems, symbolic regression improvements, and automated melanoma classification.

Remote, June 2024 -- October 2024

Course Instructor, UC Berkeley

Created, taught, and managed a new Linguistics 198 course on linguistic relativity with over 50 students enrolled.

Berkeley, CA, January 2023 -- May 2023

Projects in Progress

Qompose | Quantum Machine Learning, Transformer Models

[Link to Showcase]

Testing theoretical advantages of the attention operation on an entangled sequence, reducing attention time complexity from O(n²) to O(n).

Achieved proof of concept on an NLP task; current paradigms include n-body problem approximation and natural language processing.

Hebbian Language Program | Hebbian Neural Networks, Theoretical Linguistics

[Link to Preprint]

Theorizing, developing, and evaluating a purely Hebbian implementation of primitive linguistic abilities, including signifier-meaning mappings, linguistic compositionality, and syntacticity.

Quantifying Iconicity | Computational Linguistics

Testing alignment of semantic, phonetic, and logographic similarity spaces across languages.

Technical Skills

Programming Languages: Python, Taichi Lang, R, JavaScript, HTML/CSS, PHP, SQL, Bash

Frameworks & Libraries: PyTorch, NeuroTorch, TensorFlow, Qiskit, PennyLane, TensorFlow Quantum, Optuna, pandas, NumPy, Matplotlib, seaborn, scikit-learn, FastAPI, FastText, Gensim, jQuery, praat-parselmouth