Tech Thresholds

In 2016, optimistic founders thought that general self-driving cars were 2-3 years away. In 2024, we don’t have general self-driving cars. What happened? General self-driving cars are bottlenecked by the intelligence of the autonomy system. Founders thought they just needed intelligence level x, but they actually needed (a) a scalable algorithm to get intelligence from compute, and (b) an intelligence level of 5x (number made up). Neither was possible at the time....

April 29, 2024

explaining experiment results

An important job of a scientist is to explain results of experiments. Once you have an explanation, you can design further experiments. Without good explanations, you will design bad experiments and fail. Using Bayes' rule, the formula for an explanation given the results of experiments is: P(explanation | experiment_results) = P(experiment_results | explanation) * P(explanation) / P(experiment_results) = P(experiment_results | explanation) * P(explanation) / \sum_{explanation_i \in all_explanations} P(experiment_results | explanation_i) * P(explanation_i). Ok so what does it mean?...
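The formula above can be sketched numerically. This is a minimal illustration with made-up priors and likelihoods (the explanation names and all probabilities are invented for the example, not taken from the post):

```python
# Posterior over candidate explanations via Bayes' rule.
# Priors and likelihoods are made-up illustrative numbers.
priors = {"A": 0.6, "B": 0.3, "C": 0.1}          # P(explanation)
likelihoods = {"A": 0.2, "B": 0.7, "C": 0.5}      # P(experiment_results | explanation)

# Denominator: total probability of the observed results,
# summed over all candidate explanations.
evidence = sum(likelihoods[e] * priors[e] for e in priors)

# Posterior: P(explanation | experiment_results)
posterior = {e: likelihoods[e] * priors[e] / evidence for e in priors}
```

Note how explanation B, despite a lower prior than A, ends up most probable because it predicted the observed results far better.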

April 3, 2024

Obvious things in hindsight

Simplifying complicated concepts makes them easier to understand and extend. Extending ideas is risky and valuable. You leave a lot on the table when you don’t pay attention. Understand when to be arrogant and when to be humble. Be capable of arrogance. People remember kindness. Your (perceived) energy is what people pick up on in social situations. Determination matters. The goal of a workout routine is to minimize the mental headspace it takes....

January 10, 2024

A Correct Prediction

I am obsessed with the prediction that GPT-4 or something like it was coming in the early 2020s. Ray Kurzweil was able to predict this in 2005. The simple reasoning is this. The architecture for a computer that works like the human brain is not hard to find. Moore’s law will not bend, so we will have compute at the level of the human brain by 2020-2025. So we can make a computer like a human brain around this time....

October 13, 2023

blinders off

Dalton Caldwell and Paul Graham have pointed to this idea in startups of “having blinders on” during idea generation. They have found that founders cannot see what is right in front of their eyes due to blinders - obstructions that block them from seeing the truth. This phenomenon usually occurs when deciding what to work on while pivoting and developing an initial product. The blinders arise for different reasons. Some examples:...

July 29, 2023

user experience and AI capability - value isn't all delight

tl;dr: for many old UXes, there exists an AI capability threshold that amplifies the value prop. Everyone has had a good experience with a product recently. When your AirPods arrive in the mail, you flick the case open and place them tentatively in your ears. The noise-canceling headphones stream Anderson Paak with his distinctive soulful sound. A good product makes an indelible impression on its user - initially in terms of delight but also value over time....

July 12, 2023

Copilot

Here are rough notes (to self) on the Copilot story. Source knew that they wanted to build something using GPT-3 and started prototyping: demos were fabulous. A demo being good is not a sufficient condition. Models were not good enough for a chat interface - 25% answers that I love, 75% garbage. Code synthesis - synthesizing large function calls - not that satisfying. Small-scale autocomplete with the large models - IntelliSense dropdown UI. The UI was not the right thing: the user would get multiple options for the function body, read and pick the right one, and the human feedback would improve the model. Reasons this was bad: hit a key to request it, wait for it to come back, read three functions and click the right one - too much cognitive effort; the result was that none of them were good or you didn’t know; lots of effort on the user but not a lot coming out of it. Alex said to use the cursor position in the AST to figure out where you are in the code: if you are at the beginning, complete the whole block; if you are in the middle, just complete one line. Automatically generated with no user interaction. Model was small enough to be low-latency but big enough to be accurate. Only once all of these pieces were in place did the median new user love Copilot. Other dead-ends too along the way....
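The cursor-position heuristic in the notes above might look something like this sketch. Everything here is hypothetical and illustrative - function names, signature, and the line-based notion of "position in the block" are my assumptions, not Copilot's actual implementation:

```python
def completion_scope(cursor_line: int, block_start: int, block_end: int) -> str:
    """Hypothetical sketch: decide how much to complete based on where the
    cursor sits within the enclosing AST block (positions given as line numbers).
    Not Copilot's actual code - just the decision rule described in the notes."""
    if cursor_line == block_start:
        return "whole_block"   # at the beginning of the block: generate the full body
    elif block_start < cursor_line <= block_end:
        return "single_line"   # mid-block: just finish the current line
    return "none"              # cursor outside the block: do nothing
```

The point of the rule is to match completion size to user intent without any extra interaction: a cursor at the top of an empty function signals "write this for me," while a cursor mid-block signals "finish my thought."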

July 7, 2023

Succession

Spoiler alert: tons of spoilers for the TV show Succession. I started and stopped watching Succession about three times before I got into it. The reason was simple: all of the characters were unlikeable. Then I got into it. The reason Succession is good is that you can see competition for leadership among children who do not deserve it. And that’s relatable. The following sequence occurs in the final episode...

June 30, 2023

Unconvincing

There’s something revolting about unconvincing ambitious people - those who claim there’s a billion-dollar opportunity somewhere but with transparent naivete and thoughtlessness.

June 22, 2023

Laws of Preferences

Creating general laws in the real world is hard. Talk to a median investor, and you’ll hear confident advice about some startup subproblems. And the advice is regularly valuable. But occasionally, you’ll hear a heuristic that is not well qualified or appropriately followed up with the indispensable, “but this is just a data point.” And you’ll ask yourself whether you should change course based on the advice. The mechanism by which VCs or successful entrepreneurs give confident advice is well understood - they are in positions of power and wealth, and tons of young people look to them for advice....

June 12, 2023