Luck
13 August 2023
Naval Ravikant has a useful model for luck. I've summarized his framework as four types of luck: blind, prepared, inertial, and earned.
Blind luck is luck from pure chance. It is the result of purely external factors, none of which you influence or control. Examples of blind luck can include being in the right place at the right time, luck of the draw, or winning the genetic lottery.
Prepared luck is luck from being able to both recognize and execute on an opportunity. An example of prepared luck is developing skills and knowledge in a technical subject that one day becomes the focal point of a valuable new technology. Recognizing you have the skills and stepping up to lead a team to build out the new technology is prepared luck.
Inertial luck is luck from iteration or continuous expansion in new directions. By increasing engagement and interaction in different areas, one creates more surface area from which luck can appear. Examples of inertial luck include inventing, serial entrepreneurship, and networking.
Earned luck is luck from being recognized or known for something specific so that luck finds you. An example of earned luck is developing a reputation as the most helpful early-stage startup investor, resulting in founders of the next SpaceX or AirBnB wanting to pitch to you first, or even offering favorable terms, because of the value you're known to bring.
Machine Learning and Brain Learning
10 October 2022
Machine learning is a useful heuristic for conceptually understanding how people (and animals) develop and update their base realities. The overly simplified explanation of ML is that computers take in a ton of data, some subset of which is appropriately labeled by an authority (often a human). The computer processes the labeled data in a way that allows it to label (recognize) future content with a reasonably high probability of accuracy. The more this process is repeated (training the model), the better the ML algorithm gets at its task.
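To make the analogy concrete, here's a minimal sketch of that supervised loop in Python, using a toy nearest-neighbor classifier. The features and labels are invented purely for illustration; real systems use far more data and far richer models.

```python
import math

# Toy labeled dataset: each example is (features, label).
# The features (size_cm, weight_g) and labels are made up for illustration.
training_data = [
    ((8.0, 150.0), "apple"),
    ((7.5, 140.0), "apple"),
    ((2.0, 5.0), "grape"),
    ((2.2, 6.0), "grape"),
]

def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(features):
    """Guess a label for new content by finding the closest labeled example."""
    _, label = min(training_data, key=lambda example: distance(example[0], features))
    return label

print(predict((7.8, 145.0)))  # -> "apple"
print(predict((2.1, 5.5)))    # -> "grape"
```

Feed it more labeled examples and its guesses get better, which is the "training" loop from the paragraph above in miniature.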
It's interesting how little data or repetitive labeling is required for a human to learn something compared to how much it takes to teach a computer. Next time you're around a child, take time to observe how they interact with their environment. You'll notice they look at the world differently and try out things in unique ways. Children's minds haven't been molded to a particular form. They aren't jaded or narrow-minded. They're not just focused on the "important" stuff. Another way of thinking about it is that they haven't developed heuristics to process things quickly. Every moment is filled with data processing and model refinement going on in their brains.
You can tell that a child's brain/machine learning gets more specific over time. Parents of very young children generally explain the world using broad terms, like pet or food. Over time, sub-classifications emerge, such as dog and cat or dinner and breakfast. Of course there's no limit to how granular you can get in defining groups of things. You could say fruit, or apple, or Golden Delicious. It takes a deeper understanding to be able to distinguish between more and more distinct groupings within a given category. This is again an example of how the brain mimics machine learning in its journey to define the world.
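In classifier terms, that refinement is like swapping a coarse label set for a finer one over the same objects. A small, hypothetical sketch (the hierarchy below is made up):

```python
# Hypothetical label hierarchy: broad categories refine into finer ones over time.
hierarchy = {
    "fruit": ["apple", "banana"],
    "apple": ["Golden Delicious", "Granny Smith"],
    "pet": ["dog", "cat"],
}

def coarser(label):
    """Map a fine-grained label up to its broader category, if one is known."""
    for parent, children in hierarchy.items():
        if label in children:
            return parent
    return label  # already as coarse as we can get

print(coarser("Golden Delicious"))           # -> "apple"
print(coarser(coarser("Golden Delicious")))  # -> "fruit"
```

Telling a Golden Delicious from a Granny Smith means separating classes that a coarser labeler would happily lump together.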
Brain machine learning isn't just visual. It can apply to literally anything that's taken in through the senses. Another good example is definitions for words. Rarely do people learn what a word means by looking it up in Webster's dictionary. Over time, we develop a strong or weak sense of what a word means through repeated exposure to it in context. Interestingly, just because of differing experiences, two people can have very different internal definitions for the same word. This is a result of their machine learning algorithms processing differing datasets over the course of their lives.
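This is roughly the intuition behind distributional approaches to word meaning in NLP: a word's "definition" is the company it keeps. A crude sketch of the idea, with invented sentences standing in for two people's lifetimes of context:

```python
from collections import Counter

def context_profile(corpus, target, window=2):
    """Build a rough 'meaning' for a word from the words that appear around it."""
    profile = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            if w == target:
                start, stop = max(0, i - window), i + window + 1
                profile.update(words[start:i] + words[i + 1:stop])
    return profile

# Two people, two different datasets for the same word.
person_a = ["the bank approved the loan", "she deposited cash at the bank"]
person_b = ["we fished from the river bank", "the bank was muddy after rain"]

print(context_profile(person_a, "bank").most_common(3))
print(context_profile(person_b, "bank").most_common(3))
```

Same word, different training data, different internal definition - which is the point of the paragraph above.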
This analogy for how our brains learn may also explain why people have "gut feelings" or a "sixth sense". Oftentimes there are tiny signals, or bits of information a person has unconsciously digested in the past, that trigger some reaction or response in the present. It's what can give them that gut feeling.
Some of the most successful people say they rely a lot on their intuition. It's likely that intuition is much less a biological "feeling" of what to do than a learned response to subconscious inputs from a collection of experiences in a person's past. Successful people may just be quicker to absorb new information or register small but meaningful changes, which results in their heightened intuition - a valuable asset and catalyst to their success.
On the other hand, consider when you're out for a walk and come across a snake in the path. You might get startled as a natural reaction before realizing the snake was only a serpentine-looking branch. Though you're not in any real danger, something biological was triggered the moment your brain identified the object as a potential threat. This example illustrates the interesting question of how responses like a natural fear of snakes are passed on through DNA. Clearly they're not the result of a machine learning-like process. It's a behavior that presumably took many generations to develop and is somehow passed on, while other learned behaviors, such as speaking a foreign language, are not.