Chaotic Bifurcations in the Logistic Map
A little TIL notebook on chaotic bifurcations in the logistic map.
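The notebook itself isn't reproduced here, but as a rough sketch of what it explores: the map is x_{n+1} = r·x_n·(1 − x_n), and the familiar bifurcation diagram comes from recording, for each growth rate r, the values an orbit settles into after a transient. The parameter choices below are my own, not the notebook's.

```python
# Minimal sketch of the logistic map and the data behind a bifurcation diagram:
# for each growth rate r, iterate past a transient and keep the values the
# orbit settles onto, then scatter-plot them against r.
import numpy as np
import matplotlib.pyplot as plt

def logistic_orbit(r, x0=0.5, n_transient=500, n_keep=100):
    """Iterate x -> r*x*(1-x) and return the post-transient values."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    kept = []
    for _ in range(n_keep):           # record the attractor
        x = r * x * (1 - x)
        kept.append(x)
    return kept

rs = np.linspace(2.5, 4.0, 1000)
points = [(r, x) for r in rs for x in logistic_orbit(r)]
r_vals, x_vals = zip(*points)
plt.plot(r_vals, x_vals, ",k", alpha=0.25)   # one dot per settled value
plt.xlabel("r")
plt.ylabel("x")
plt.show()
```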



Hi, I'm Alexis! Please get in touch if you're curious about anything I'm writing about here.
I work at Answer.AI, figuring out how to make AI more useful. Previously at Google and various startups, I like learning how things work and trying to make objects and explanations absolutely and perfectly clear. AI is still confusing as hell so it's great fun!
Recently I've been building AI legal tools, and podcasting about our AI notebooks.
AI chat regarding Peter Wessel Zapffe’s The Last Messiah, Herzog’s penguin, modern philosophical pessimism, and whether it is ever much more than a dramatic gesture.
2025 was the year of vibecoding and AI agents. But the most improbable part of the year was the discovery that Claude Code, an old-school, text-based, command-line app, was the ideal form factor for futuristic agentic workflows. Why did it happen this way? Here’s my explanation.
This is a walkthrough and video showing how to use emacs within SolveIt. Then you can run lisp in SolveIt and use the SolveIt AI to inspect emacs buffers. But it’s mainly an excuse to point out the commonalities between SolveIt & Python, and emacs & lisp, the new and the old of live programming environments. Also available as an importable ShareIt. Episode 7 of 15-Minute ShareIt.
Michael Smith writes about the distinction between tools that make us smarter vs dumber. I thought the comparison between abacuses and calculators was memorable:
Learning how to use an abacus trains your brain to internalize it. Arithmetic becomes faster and more reliable over time, and the mechanisms behind why different strategies work become obvious and intuitive. Eventually you don’t even need the physical abacus anymore. Whereas with a calculator … those mental skills sort of fade away over time. And you will always need a calculator for math: it never becomes part of you the way an abacus does.
This topic has many dimensions, which means it lends itself a little too easily to simplification.
Obviously, enlightening tools are better than stultifying ones. And obviously, certain educational benefits only come through unpleasant hard work. So let’s have demanding tools that educate us.
However, also obviously, there’s value in tools that are easy to use. So let’s make tools pleasant and effortless, and save education for classrooms.
Also, somewhat obviously, new tools are often not simply easier. They make one kind of difficulty go away but introduce a new kind of difficulty, which is educational in a new way. Right now, for instance, there exist people who are expert at writing code, but who are so bad at prompting LLMs to generate good code that they still claim it cannot be done!
In other words, there are a lot of obviously true points at play but they all point in different directions. Analogies are great in this situation, because every analogy serves as a specific, memorable peg for a particular set of tradeoffs.
So let’s follow the analogy. The idea is, an abacus is better than the calculator because it helps you internalize arithmetic. I buy that idea. That’s why I have a slide rule by my desk, in the hope it will help me internalize logarithmic relationships. (It’s not working.)
But…what are you really internalizing? Memorably, Feynman tells a story about initially losing a mental-calculation competition against an abacus salesman, but then ultimately winning as the problems became more complex, specifically because the abacus encouraged a mental skill which was too rote and procedural, and did not promote higher-level insight:
A few weeks later, the man came into the cocktail lounge of the hotel I was staying at. He recognized me and came over. “Tell me,” he said, “how were you able to do that cube-root problem so fast?”
I started to explain that it was an approximate method, and had to do with the percentage of error. “Suppose you had given me 28. Now the cube root of 27 is 3 …”
He picks up his abacus: zzzzzzzzzzzzzzz— “Oh yes,” he says.
I realized something: he doesn’t know numbers. With the abacus, you don’t have to memorize a lot of arithmetic combinations; all you have to do is to learn to push the little beads up and down. You don’t have to memorize 9+7=16; you just know that when you add 9, you push a ten’s bead up and pull a one’s bead down. So we’re slower at basic arithmetic, but we know numbers.
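Feynman’s “approximate method” is just a first-order correction around a nearby perfect cube: for x close to a³, the cube root of x is roughly a + (x − a³) / (3a²). A quick sketch of that, with my own numbers as a check rather than the book’s:

```python
# First-order approximation of a cube root near a known perfect cube:
# cbrt(a^3 + d) ≈ a + d / (3 * a^2), so the correction is proportional
# to the distance from the perfect cube.
def approx_cbrt(x, a):
    """Approximate cbrt(x) by linearizing around the perfect cube a**3."""
    return a + (x - a**3) / (3 * a**2)

print(approx_cbrt(28, 3))    # ~3.0370 -- Feynman's kind of quick estimate
print(28 ** (1 / 3))         # ~3.0366 -- the exact value, for comparison
```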
Right now, many worry that LLMs will make us get worse at writing code. I think they probably will. But they may also be inviting us to get better at something deeper.
This is a walkthrough on implementing styled components in FastHTML, within SolveIt. Also available as an importable ShareIt notebook. Episode 5 of 15-Minute ShareIt.
This is a provocative analogy:
I’m skeptical that hyper-scale LLMs have a viable long-term future. They are the Apollo Moon missions of “AI”. In the end, quite probably just not worth it. Maybe we’ll get to visit them in the museums their data centres might become?
The whole post is worth a read and I do agree with some of it. The main point is that the hard part of software development is not necessarily the coding, but “turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous”. That’s quite true.
But I find LLMs help with that too. A lot! So it’s a false distinction to separate the thinking from the coding, and to say they don’t help with thinking.
It is true that AI tools are random and unreliable in a way that earlier abstraction technologies, like the compiler, were not. But I don’t think that distinction will matter very much in the long run. We will get better at handling imperfectly reliable AI tools, just as managers get good at handling imperfectly reliable human beings.
So I think the post underestimates the practical value of frontier LLMs, both in the future and right now.
Also, what does the analogy really imply? The moonshot was a world-historical achievement — by my reckoning, the most significant historical event of the last millennium. And even if we didn’t go back to the moon, we all use space technology indirectly every day. When Apollo 11 landed, there were a few hundred satellites in orbit. Now, there are nearly ten thousand. It’s quite possible Jason relied on the communication satellites in orbit today to publish his post.
Two weeks ago around 3am I couldn’t sleep so I was browsing twitter (bad habit). I ran into this tweet.
In fact I have a soft spot in my heart for bettermotherfuckingwebsite. I used its spartan, bare bones wisdom as the starting point for my original site a few years ago. So I groggily thought, I should reply with a page for HTMX (the JavaScript library for HTML-oriented web development). So I bought a domain and went back to sleep.
The next morning I woke up, remembered what I had done, and vibed out a website. I used Claude for a variety of tasks:
This allowed me to reply to the original tweet with a website as a punchline. Behold!
Okay, it’s not Mark Twain. But this took less than two hours!
To frequent model users, it may not be news that you can use just one tool (Claude Code in this case, but I could have used SolveIt) to do so many different kinds of work so quickly.
But I still thought it was neat, so I recorded a dev chat with my colleague Erik about it.
Later it briefly ended up on the front page of Hacker News. If you’re curious about the workflow for this sort of thing, I used Simon Willison’s new Claude export tool to export the chat transcripts warts-and-all, and the site is open source.
In fact, in the transcripts, you can even see my cringeworthy attempts to figure out how I should retweet it, and to fret over the merit of criticism there that I was wasting people’s time by pushing AI slop into the world.
I do feel a little bad about that. But hey, I didn’t post it on Hacker News! I just replied to a tweet, and started a conversation. And now I have atoned for my sins, by writing every goddamn word of this blog post by hand, like a cave man, or like William Shakespeare.
fastmigrate is a library and tool for database migrations, where migrations are nothing but a set of well-named scripts. This post explains what database migrations are, what problem they solve, and how to use fastmigrate for migrations in SQLite.
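As a rough sketch of the underlying idea, not fastmigrate’s actual API: migrations are version-prefixed scripts applied in order, and the database records the last version applied, so a migration runs exactly once. All names below are hypothetical.

```python
# Toy sketch of migrations-as-ordered-scripts (not fastmigrate's real API):
# each .sql file in migrations/ has a numeric version prefix, e.g.
# "0002-add-users.sql", and we apply, in order, only scripts newer than the
# version recorded in SQLite's PRAGMA user_version.
import sqlite3
from pathlib import Path

def migrate(db_path="app.db", migrations_dir="migrations"):
    conn = sqlite3.connect(db_path)
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        version = int(script.name.split("-")[0])   # version prefix of the filename
        if version > current:
            conn.executescript(script.read_text())          # run the migration
            conn.execute(f"PRAGMA user_version = {version}") # record it as applied
            conn.commit()
    conn.close()
```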
I want to experiment more with local models to understand their limits, so I want them to be easy to install and run. That suggests using ollama. I don’t have a beefy MacBook Pro, so I’d like to run them on my local Linux server. Here are instructions for setting up ollama on a local Debian server, accessible from your laptop on the same local subnet.
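Once the server is up, a minimal sanity check from the laptop might look like the following. The LAN address and model name are placeholders, and it assumes the server’s ollama binds beyond localhost (e.g. OLLAMA_HOST=0.0.0.0 in its environment) and that the model has already been pulled there.

```python
# Minimal check that a remote ollama server is reachable over the LAN.
import requests

SERVER = "http://192.168.1.50:11434"   # hypothetical LAN address of the Debian box

resp = requests.post(
    f"{SERVER}/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in one word.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])          # the model's completion
```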
Introducing ModernBERT, a family of state-of-the-art encoder-only models representing improvements over older-generation encoders across the board, with 8192 sequence length, better downstream performance, and much faster processing. Available as a slot-in replacement for any BERT-like model.
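For example, with a recent version of Hugging Face transformers that includes ModernBERT support, it drops into the usual fill-mask pipeline. A minimal sketch:

```python
# ModernBERT as a drop-in masked-language-model encoder via transformers.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base")
for pred in fill_mask("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))   # top predictions for the mask
```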
Nate Cooper’s ShellSage is one of the coolest pieces of tech to come out of AnswerAI recently. Using it with iTerm creates a magical experience.
At the CUDA Mode 2024 hackathon, Nate Cook and I stumbled into vibecoding before it got that name. Using a then-secret AnswerAI tool, AIMagic, we relied completely on AI to generate a stable diffusion library in C. We were amazed at how well this worked and placed in the top ten of the hackathon. This post, written at the time, prefigures the discoveries and debates which would span 2025.
What do transformer-based AI models actually learn? Can they solve complex problems by reasoning systematically through multiple steps? The Faith and Fate paper (Dziri et al. 2023) suggests answers: they often succeed by pattern matching, not systematic reasoning.
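To make “multiple steps” concrete: one of the paper’s tasks, multi-digit multiplication, decomposes into a graph of small sub-steps, and errors compound as that graph deepens. A toy rendering of that decomposition, my own illustration rather than the paper’s code:

```python
# Long multiplication written as an explicit sequence of small sub-steps --
# the kind of compositional task Dziri et al. use to probe whether models
# reason step by step or pattern-match on surface forms.
def long_multiply(a: int, b: int):
    steps = []
    total = 0
    for i, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10**i       # one single-digit sub-problem
        steps.append(f"{a} * {digit} * 10^{i} = {partial}")
        total += partial
    steps.append(f"sum of partials = {total}")
    return total, steps

result, trace = long_multiply(437, 86)
print(result)            # 37582
for line in trace:
    print(" ", line)
```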