Biases distort our perception of reality.

Consider survivorship bias. When viewing a group, you often only get to see the success stories, not the losers. As a result, you may be deceived into thinking certain traits matter because they show up in all the successes (even if they also show up in a lot of the failures).

Or consider the friendship paradox. This occurs because your assessment of what is common amongst your peers is based on what your friends do. However, some people have more friends than others, which means that when you average over your friends, the more extroverted people get counted more. This leads to perceptions of drinking and promiscuity being higher than the actual average, due to a distortion in whom you look at.

These biases are real, but today I’d like to look at a different one: achievement bias.

Achievement Bias: You Only Hear About Those Obsessed with Success

Consider two friends. One decides that becoming a successful scientist is the most important goal in life, works non-stop and eventually becomes a Nobel-prize winner. The other decides that a nice, comfortable family life matters more and earns a respectable income, but never becomes world famous.

Question: whose biography are you going to read?

The peril of achievement bias is that achievement crowds out other values when we look to the lives of others to learn what makes a good life. If we assume that valuing achievement is at least correlated with achieving it, the result is that those who achieve great success value achievement disproportionately compared to the average person.

There are two major dangers to achievement bias.

The first is that it makes the pursuit of success look more common than it really is. Since you only see and hear from the minority of high-achievers, it can create the impression that most people value success to the same degree.

The second danger is that those who don’t value achievement at all are silent. In other words, if you value something that is incompatible with success, your story goes unsung and nobody hears about it. This means the virtues of a life not devoted to striving go missing.

Why Even Famous Non-Achievers Over-Achieve

The really funny part of achievement bias comes when it deals with the “success” of speakers, thinkers and philosophers who pride themselves on virtues other than achievement.

Nearly every famous speaker and writer on spirituality, whose work argues in favor of well-being over material success, is nonetheless famous and successful. Which means those people are either accidentally successful, or at least selectively hypocritical about their own message—focusing on massive outreach, book deals, speaking tours and seminars over inner peace.

This isn’t to rebuke those people. In some ways, I laud them for being able to spread a much-needed message, even though spreading it often runs counter to the philosophy they espouse.

My point isn’t to criticize the hypocrisy, but to point out that even with mild hypocrisy, the promotion of achievement as a virtue is probably excessive.

I’m just as wrapped up in this as anyone else. I also value achievement very highly. That value has made work central to my life at times when it wouldn’t have been for other people. That same prioritization also means you’re reading my words today. Thus, the advice you receive from me is, at least in part, a product of my personality and values, rather than a truly objective assessment of what is best for you.

Why Might an Overemphasis on Achievement Be Bad?

Life is full of trade-offs. You can sleep in or wake up early to work. Spend money on that fancy trip or put those dollars away for retirement. Stick to your diet, or enjoy that pizza.

Achievement, like all other good things in life, must also trade off against other values at the extremes. Albert Einstein was a great scientist, but he also had a lousy marriage. While I think the world is better off for his dedication to science, I’m not sure his wife would have agreed.

Prioritizing achievement itself is a valid choice. The problem with achievement bias is that if you want to learn and study from people who aren’t in your immediate vicinity, it’s difficult to learn from anyone who didn’t prioritize achievement in this way.

Is there a hidden subset of the population that is happy and unremarkable? Perhaps these exemplars are worth learning from, more than just the people who made millions by obsessing over success.

How Do You Counteract Achievement Bias?

Achievement bias is like all other cognitive biases: easy to point out, very difficult to overcome.

For every pundit that heckles, “Survivorship bias!” when someone talks about the morning routines of famous CEOs, I want to yell back, “But what are people supposed to do?” The fact is that even if there is a survivorship bias, gathering up all the people who weren’t successful and statistically controlling for their results is an enormous project that would require teams of trained researchers. Not something an ordinary person can actively counteract.

Achievement bias is similarly hard to counteract, because it’s harder to argue for the opposite. Writing is a skill. Like most skills, it takes years of work to get good at, the kind of thing someone who values achievement might do. If you don’t value achievement, and don’t get really good at writing, you probably won’t be very articulate arguing in favor of non-achievement as a virtue.

I think this bias probably means we should consider non-achievers’ words more carefully, especially since those voices are rarer. It also means we ought to talk to more everyday people and not merely look up to the most famous and successful for all our worldly wisdom.

The post Do You Even Want Success? (The Perils of Achievement Bias) appeared first on Scott H Young.

Scott H Young by Scott Young - 4d ago

Be yourself. Be genuine. Show people who you really are. Our culture is obsessed with authenticity. But what if the entire idea of being authentic is based on a lie?

There’s a pervasive belief people hold that the person they appear to be is not who they actually are. Underneath all the masks, there’s a real face that’s yours.

This is a myth. There is no “real” you. The “real” you is just another mask, and often one that gets in the way of you being a better person.

Why is the “Real” You Always Better?

The first thing you’ll notice whenever you hear someone talk about their “real” self is that it’s never negative. No matter what they’ve done, the “real” version of themselves is just hurting, or confused, or lashing out.

This already is a huge red flag. People are shitty all the time. They lie, cheat, steal and hurt others. They do this because, in many ways, we didn’t evolve to be perfect angels. We’re a mix of good and bad, although some of us are mixed in different ratios.

So if people are shitty, why is the “real” you always good?

I think the reason is that the “real” you is a self-serving deception. It’s a rationale that explains away all the bad things you’ve done by saying that it wasn’t the “real” you that did that, but something distinct from yourself. It wasn’t the “real” you that lied to your friend, but just something inauthentic you did because you were worried about being judged.

In this sense, the “real” you isn’t truer or more authentic. Rather it’s who you wish you were (or perhaps, more accurately, how you wish other people would see you). That this picture tends to be flattering should be less reason to trust it, not more.

Who Knows You Really?

In an interesting study, close friends were better at predicting some personality traits on a subsequent test than the subjects themselves.

This means that, in some real sense, the people around you who only observe your actions, the people who never see your innermost thoughts and self, nonetheless are better at estimating aspects of your personality than you are.

Consider two possible hypotheses. The first, and most popular, is that there’s a “real” you lurking underneath your behavior, to which only you have access. Surely, if this were true, you’d score much better on self-assessments than the people around you, who often only have access to the more superficial layers of what you say and how you act.

Now, consider my hypothesis: that the “real” you is just as much a performance as everything else, except the audience isn’t other people—it’s you. Under this view, it’s no wonder the people around you can guess your personality better: they have less reason to deceive themselves about it!

How Your Real Self Gets in the Way of Genuine Growth

In some ways, the idea of a “real” self is a comforting delusion. It allows us to make peace with whatever bad things we’ve done in the past by divorcing them from who we “really” are.

However, this delusion can be damaging if it keeps you from making changes to the person you appear to be. When your “real” self is good, this can insulate you from needing to make changes to the “outer” you that other people witness. After all, if you’re a great person deep down, as long as people get to know you, it doesn’t matter so much that you’re an asshole to everyone else.

Removing that delusion can be painful. It can be painful to admit that the person we feel we are deep down is no more central to our existence than the way we act, speak and behave for the world to see. But removing that separation can also foster growth. Seeing how you’ve hurt people or lived a life less than ideal, and recognizing that you can’t rationalize those harms away, can also give you the impetus to change them.

What are You Really?

Selves, in the end, are just simple models of a more complex reality. Your name, personality, abilities, mannerisms, memories and preferences are just ways of simplifying a tremendously complicated reality.

Ultimately, who you are is just a story you tell. Sometimes, in the case of “real” selves, it’s a story you tell yourself (and often a self-serving one). Other times, who you are is a story you tell other people.

Any good writer or journalist will tell you that innumerable true stories can be told about any event, depending on what is emphasized or omitted. That a story is merely true still leaves enough degrees of freedom to generate wildly different impressions from it.

As the “real” you is just a story you tell yourself, you can also change it if you want to. Not just your future, but also your present and your past. Not through lying, but through recognizing that there were always countless true stories of yourself that you just aren’t used to telling yet.

The post There is No “Real” You appeared first on Scott H Young.


Some ideas are so powerful and useful that once you understand them deeply, you start to see them everywhere. One of those ideas is the normal distribution.

Normal distributions, also known as bell curves or Gaussian functions, have a fat lump in the middle and two “tails” that stretch off in either direction, getting smaller and smaller.

That these tails get smaller and smaller, however, is an understatement. Written out as a function, a normal curve trails off according to an exponential of a negative square. If you recall the article on growth curves, this means the tails don’t just get smaller by a little bit: they shrink even faster than exponential decay!
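To make this concrete, the standard normal density can be written as:

```latex
f(x) = \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2}
```

The tail behavior is governed by the \(e^{-x^2/2}\) term: because the exponent is quadratic rather than linear in \(x\), the density eventually falls below \(e^{-cx}\) for any constant \(c\), which is exactly the faster-than-exponential decay described above.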

Why Do Normal Distributions Matter?

Normal distributions show up everywhere. Human height is (approximately) normal. Intelligence, as measured by IQ, is normal. Measurement errors tend to be normally distributed.

The question is why this mathematical pattern shows up so much.

The answer is that normal distributions come out of a process of averaging. A neat little idea from statistics known as the central limit theorem says that if you take any distribution you like (it doesn’t have to be normal), draw groups of random elements from it, and take the average of each group, then, voila, the averages follow a normal distribution.

This is why normal distributions come up so often: not because there’s some mathematical conspiracy to make things follow a bell curve, but because this is the natural pattern that results whenever a process averages out many small effects.
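The central limit theorem is easy to check with a quick simulation. Here is a minimal sketch in Python (the uniform distribution, group size of 30, and sample count are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)  # reproducible draws

# Start from a decidedly non-normal distribution: uniform on [0, 1).
# Each data point below is the average of n independent draws.
n = 30            # draws averaged per data point
samples = 10_000  # number of averaged data points

averages = [
    statistics.fmean(random.random() for _ in range(n))
    for _ in range(samples)
]

mean = statistics.fmean(averages)
stdev = statistics.stdev(averages)

# The central limit theorem predicts the averages cluster around the
# uniform mean (0.5) with standard deviation sigma / sqrt(n), where
# sigma^2 = 1/12 is the variance of the uniform distribution.
predicted_stdev = (1 / 12) ** 0.5 / n ** 0.5

print(f"mean: {mean:.3f}, stdev: {stdev:.4f}, predicted: {predicted_stdev:.4f}")
```

Plotting a histogram of `averages` shows the familiar bell shape, even though the underlying uniform distribution is completely flat.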

Height is likely a good example of a normal distribution because it is caused by many different genes and environmental influences, each of which has a small effect that points in a different direction. Add them all together, average them, and you end up with something that looks normal.

Intelligence, being similarly highly polygenic, is also distributed normally because little influences average out. To be really smart usually requires being lucky enough to get a ton of random elements all pointing in the same direction.

Fat Tails, Violations of Normality and Taleb

This understanding of normal distributions, as the result of “averaging” out many small effects from any random distribution you like, wasn’t how I first encountered the idea.

In fact, my first exposure to the idea came from Nassim Nicholas Taleb (NNT) in his book, The Black Swan. NNT is somewhat famous for his heated polemics against popular academic ideas, and chief among them was his charge that scientists overuse normal distributions.

In particular, he argued, we often face distributions with “fat tails,” as he calls them. Remember the original normal distribution? It falls off even faster than exponential decay, so its tails are, mathematically speaking, quite slim. In real life, NNT argues, we see events in the extreme tails more often than a normal distribution would predict, so assuming normality gives us too much confidence that rare “black swan” events are statistically impossible.

While I think there’s merit to NNT’s critique, I wish I had actually taken a statistics class before reading his book. He makes it sound as if there was no good reason for thinking in terms of normal distributions at all, and that it was just a convenient choice to make the math easier. While that can (sometimes) be true, the central limit theorem provides a convincing rationale for why approximately normal distributions are quite common.

Why the fat tails then?

Well, the problem may be that many phenomena look like averaging most of the time, but can be swamped by one extreme effect. The genetic influence on intelligence may look like a bunch of small effects adding up, except that if you get an extra chromosome (a single change) you will have Down syndrome, which causes marked declines in cognitive ability. Similarly, dwarfism or gigantism may be caused by a few rare mutations, rather than numerous genetic influences all pointing in the same direction by chance.

In other cases, averaging might not be the right way to think about a phenomenon at all. In financial markets, normal distributions may not adequately explain behavior because price movements aren’t caused by random, independent choices. If a dip starts to occur, others will change their behavior to compensate, either causing a rebound or a crash, and thus the conditions of normality may be violated.
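This “mostly averaging, plus a rare extreme effect” pattern can be sketched with a toy mixture model. The 1% shock probability and 10x shock size below are made-up numbers purely for illustration:

```python
import random
import statistics

random.seed(0)  # reproducible draws

def draw():
    """Mostly an ordinary averaged outcome, occasionally a large shock."""
    if random.random() < 0.01:
        return random.gauss(0, 10)  # rare extreme effect
    return random.gauss(0, 1)       # ordinary small-effects outcome

samples = [draw() for _ in range(100_000)]
mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)

# Count outcomes beyond 4 standard deviations of a normal fitted to the data.
extreme = sum(1 for x in samples if abs(x - mu) > 4 * sigma)

# A true normal distribution predicts only about 6 such events in 100,000
# (P(|Z| > 4) is roughly 0.006%); the mixture produces hundreds.
print(extreme)
```

Fitting a normal distribution to this data and trusting its tail probabilities would badly underestimate how often extreme events occur, which is the heart of the fat-tails critique.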

How to Apply Normal Distributions

The first way to apply normal distributions is to ask yourself whether the phenomenon you’re trying to understand can be seen as the “averaging” of many small, independent forces. If it can, a normal distribution may be a good approximation.

If the process is mostly averaging, but there are rare effects that aren’t, you may get fat tails, where extreme outcomes happen more often than the normal distribution’s famously fast-declining tails would predict.

If the process isn’t averaging out independent random actions at all, as with stock market prices, you may not get anything like a normal distribution, and you’re in a domain where prediction gets harder.

The value of a normal distribution is that it can give you a very good idea of what to expect when it applies. Even if you’re in a “fat tail” domain, you can still expect that the normal distribution will be a good approximation most of the time, as long as you’re not relying on it too precisely for more extreme events.

The post Useful Mental Model: Normal Distribution appeared first on Scott H Young.


My dad, who taught elementary school for thirty-seven years, once had a kid in his class who would always interrupt, saying, “I know that already!”

It was doubly annoying, he told me, because this kid clearly did not know it already. He was near the bottom of the class, and could probably have learned more if he had just sat and listened.

It’s an amusing story, but how many of us are acting like this kid, saying, “I know that already,” when we really ought to listen?

Why You Don’t “Know That Already”

A sentiment I’ve heard expressed often is that “I already know what to do, I just don’t do it.” The idea being that you don’t need more education, you just need to do the things you already know.

In a certain sense this is probably true. Eat plenty of veggies and not too much junk food. Save for retirement. Go to sleep early. Don’t yell at your spouse. These aren’t so surprising.

You know these things, but you probably slip on them more often than you’d like. Therefore, it makes sense that you might say you know these things already, and the disconnect is simply willpower, motivation or both.

Doing is a Kind of Knowing

Where I disagree with this common sentiment is that being able to do things is a kind of knowing. Being able to follow your plans, stick to habits, achieve hard goals and maintain your cool in tense situations are learnable skills.

Therefore, while a failure to save money or eat your broccoli may not stem from a poor understanding of finance or nutrition, it still represents a gap in your knowledge. You don’t know how to do the things you need to do.

To appreciate this, you have to see willpower, motivation and behavior as complex systems you don’t fully understand. Seen this way, the fact that you don’t save for retirement, even though you really should, comes down to a lack of knowledge. What you’re missing isn’t the awareness that saving is important, but the knowledge of how to make that change happen in your life.

How Do You Learn to Manage Yourself?

Getting this kind of knowing is, like learning all things, a dose of theory and a whole lotta practice.

The theory itself isn’t too hard. Habits, goal-setting, motivation, confidence, self-conception: these are all topics you could probably understand well enough after reading 10-20 books.

The practice is more work. It’s not just that self-improvement is hard work, which, of course, it is. Rather, the specific difficulty is the capacity for self-reflection. It’s not enough to just go out and do things: you need to observe what you’re doing and track the results.

Here are some questions to reflect upon:

  • When you have to work on something you find unpleasant, how do you still put in the time?
  • When you need to sustain effort on something for years, without faltering, how do you keep it up?
  • How do you make progress when you feel tired, or even exhausted, most of the time from your regular life?
  • How often do you start projects, only to get distracted and change targets a few weeks or months later?

These are just a few, but if you don’t have concrete systems in place for handling these common issues, I’d argue that you can’t really say you “know it already.”

Reframing Self-Control Problems as Self-Understanding Problems

There’s another side-benefit to all this. I find that when you reframe a problem of self-control as one of not fully understanding yourself, it’s a lot easier to move forward.

Say you’re trying to exercise and you haven’t been going much. If that’s a “willpower” or “motivation” problem, what are you going to do about that? Probably nothing. If saying to yourself “Just do it!” doesn’t work, what then?

In contrast, if you view your problem with exercise as not having the right system to make it an automatic habit, all of a sudden you can ask more interesting questions. Why has my habit been hard to sustain? What could make it last longer in the future? What things have I been doing that make it harder to keep up?

I find that a similar leap is often responsible for success and failure with learning as well. If you see studying as mostly a self-control issue, rather than a complex system of both the cognitive acts of learning and the habits and behaviors that make those acts happen, you’ll spend most of your time beating yourself up for “not studying hard enough.”

The solution, therefore, is to admit you don’t actually know it already; that there are, in fact, many things you don’t know or understand about yourself. You can get much better results if you only choose to learn and listen.

The post “I already know what to do, I just don’t do it.” I disagree. appeared first on Scott H Young.


The best book I’ve read on sleep is Matthew Walker’s Why We Sleep. In it, he explains the importance of getting good sleep as well as offers suggestions for how to avoid problems of nighttime insomnia.

Here are a few strategies you can apply to sleep better:

1. Reduce light levels (especially blue and white light) before bed

The body uses two different hormone systems for signaling the need to sleep. One of these is managed by melatonin and is influenced by light levels, creating our circadian rhythm of night and day. It’s also responsible for jet lag: when you travel overseas, your mental clock is out of sync with the actual clock, so you struggle to sleep at night and want to nap all day.

This melatonin system is influenced by light. Unfortunately, in our modern environment we are constantly illuminated by bulbs and screens, making it easy for the system to get out of sync.

Tip: If you struggle to get to sleep early, make it a habit to use minimal lighting (or none) in the hour before bed. I often listen to audiobooks in a mostly dark room as I try to fall asleep. Avoid LED screens, which emit more blue light and are more likely to trick your brain into thinking it’s daytime.

2. Avoid caffeine after noon (including decaf coffee)

Caffeine interacts with the second of the two hormone systems our bodies use for sleep. The longer we go without sleep, the more adenosine we accumulate. Receptors monitor adenosine levels and push us to sleep when it has been a long time without shut-eye. Caffeine, in turn, temporarily “plugs” these receptors so they can’t deliver the sleep signal they normally would.

The problem, however, is that caffeine doesn’t actually remove adenosine from your body (or give the restorative benefits of sleep), so when it finally breaks down, all the adenosine that was present before comes back and can make you feel worse than before. Sometimes, this can lead to the urge to have a second cup (or fourth) in the afternoon, to push through the rest of the day.

Unfortunately, this can also interfere with later sleep. Caffeine loses its immediate kick quickly, but it has a surprisingly long half-life in the brain, meaning that even hours after drinking it, there is still a non-trivial amount in your system. Decaf coffee, while containing much less caffeine than normal, still has some, so a decaf after dinner might also make it harder to sleep.

3. Sleep the same time on weekdays and weekends

I know, I know. Easier said than done. Weekends are a good time for socializing, and who wants to be the person going home to bed at nine pm?

Still, the benefits of a consistent sleep schedule may make up for the occasional social interference. Staying up late, especially if you struggle to sleep in fully, can mean you’re not getting a full night’s rest on weekends. This is particularly true if you drink alcohol before sleeping, which can interfere with the brain processes that make sleep restorative. Do this regularly and it’s no wonder you’re always exhausted.

Sleeping habitually at the same time is a good way to prevent missing sleep.

4. Watch out for naps

If you struggle with falling asleep (or staying asleep), the fatigue can push you to take naps during the day. However, as Walker points out in his book, napping relieves some of the adenosine build-up, which, in turn, makes it harder to fall asleep at night.

I certainly struggle with this advice myself, as I often take quick naps in the early afternoon. I do think a short nap (15-20 minutes) is preferable to a long one, if only because it has a lesser impact on later sleeping.

5. Get help from your family

For some, going to sleep is an entirely independent choice. However, for many of us (myself included), going to sleep usually means going to sleep with a spouse (or having them come in later, while you’re already trying to sleep). Therefore, when working on a new sleep habit, it’s important to communicate your goals and motivations. If you don’t talk about it, the habit likely won’t last, as you stay up to watch another episode of Stranger Things.

The post The 5 Keys to Falling Asleep On Time Every Night appeared first on Scott H Young.


Most people don’t fail that much. They may not always succeed, but genuine, fall-flat-on-your-face failure is quite rare.

Instead, most efforts fizzle long before anyone could call them failures.

Our plans don’t explode, but fizzle out and are forgotten. Instead of detonating and needing to pick up the pieces, they get pushed to the back of the closet. We expect to take them back out another day, but instead they collect cobwebs.

You Should Try to Fail More

Aim for success, obviously. But aim in such a way that it’s still possible to miss. Because when failure isn’t possible at all, the outcome is usually to fizzle out instead.

Try projects that might not work. Ask people out who might reject you. Learn something you might be bad at. Start a business that might not make any money.

Again, the goal isn’t failure itself, but to live in a way where failing is a possibility. Only that way will your successes mean anything at all.

The post Fear Fizzle More Than Failure appeared first on Scott H Young.


I’ve been recording a lot of podcast interviews for my upcoming book, Ultralearning. One of the recurring themes I’ve noticed in our conversations is that how people feel about learning is the overwhelming cause of the results they experience.

Yes, intelligence, talent, great teachers and schools all matter.

But if you feel like learning something is too hard, scary or not interesting enough to merit the effort, none of those things will help you.

The people who fail to learn languages are, overwhelmingly, the people who don’t even try to learn a language. The people who “can’t” learn math, coding, business, marketing or dance, aren’t those who tried and failed—they never even attempted it seriously in the first place.

Learning is Frustration

It’s easy to be dismissive of this attitude. “Don’t those people realize that you can learn anything, as long as you’re persistent and use the right approach?”

But feelings aren’t rational, so taking a nagging stance to encourage people to learn hard things is a waste of time. If you feel like you can’t learn math, French or samba, me telling you differently won’t change things.

The truth is, I’ve had my own moments of doubt and frustration, and not even that long ago.

In January, I started learning salsa dancing with my wife. She has danced for years, although never salsa, whereas I’ve done very little.

Immediately, in the first classes, the overwhelming feeling was, “I hate this.” Not because learning salsa couldn’t be cool, but because I could see myself in the mirror. My steps are out of sync. I’m not on the rhythm. When I do partner dancing, I forget how to do it, and then we switch partners before I get a chance to figure it out.

The thing is, I know this is just the frustration barrier. I know that once I get past that novice level and start being able to do it (which is going to happen with enough practice), I will start to enjoy it. If I put in enough time, I might even really love salsa dancing.

But that’s not how it feels. My brain sees me slightly underperforming and the immediate, visceral sense is: “You’re not good at this, you should stop right now and quit embarrassing yourself.”

How Do You Change Your Beliefs About Learning?

I think there’s a few approaches you can take to overcome these sorts of learning challenges:

1. Dive Straight In.

Ultralearning, in my opinion, often works well because it compresses the frustration barrier to a shorter period of time. Going no-English to learn a language is stressful, but the stress lasts for a couple weeks, rather than a couple years as it can in traditional classrooms.

Because the stress is short, you can more easily leap over it compared to the non-stop grind of emotional struggle you can feel when a skill never quite gets out of that frustration period.

2. Avoid Comparison.

My feelings about salsa were largely driven by my classmates. They were better than me. Whenever we, as human beings, sense a comparative disadvantage, it’s as if our brain immediately tries to avoid practicing the skill.

I’m not sure if this is an evolved adaptation towards specializing in our strengths (if so, it would have to predate our modern, specialized economy), or whether this is simply because being low-skilled is low-status and our status-seeking instincts override the long-term goals of learning.

However, one simple way to avoid this problem is to put yourself in projects or situations that defy comparison.

One-on-one tutoring immediately removes the “I’m the worst in the class” feeling. It also removes the “I’m the best in the class” laziness that can afflict high-performing students.

Even structuring a project that is intense and unusual often avoids this problem. When I was doing the MIT Challenge, I never felt bad about struggling with concepts or ideas because nobody else was doing this self-education thing so there was no expectation of performance.

3. Embrace the Frustration.

“I hate this,” isn’t a feeling—it’s a sentence. It’s a sentence you mentally utter in automatic response to certain things going on in your environment. However, recognize that this isn’t a single, unified experience, but several discrete experiences happening in lockstep:

  1. You notice you’re doing something badly.
  2. You notice that others may notice you’re doing something badly.
  3. You feel embarrassed, and start to feel bad.
  4. You feel like you need to escape or stop.
  5. You say to yourself, “I hate this.”

This is a train of thought you can get off at any stop; you just choose to ride it all the way to the terminal station. If you’re mindful of it, you can set it on alternate tracks.

What if when you notice you’re doing badly, you reaffirmed, “But it’s okay, doing things badly is what learning is all about. That’s why I’m here.”?

Or what if you start to feel embarrassed and you say to yourself, “It’s okay if people think I’m bad at this. As long as I’m not hurting anyone and trying my best, nobody will hold it against me.”?

Or what if, when you feel you need to escape, you say, “Let’s just go a little bit longer”?

As you examine it more closely, the feeling of frustration itself becomes a potential space for new experiences. You realize how much your own feelings of inadequacy straitjacket you into a limited view of your life. The pain you feel from doing badly, ironically, becomes a moment of potential liberation because through it you can rewrite the story of who you are.

Salsa Dancing and Overcoming Frustration

I’m still not great at salsa dancing. But I have gotten better. The moments where I say, “I hate this,” are now outnumbered by, “Hey, this is actually pretty fun, once you get the hang of it.”

I know, from learning other skills, that “pretty fun” becomes “amazing and life-affirming” if you can keep going a little bit longer.

Learning, and ultralearning, to me represent the cultivation of these amazing, life-affirming moments. When you get good at something that previously felt impossible for you, your world becomes just a little bit bigger. This expansion of possibility, more than just achieving a goal, is the stuff of happiness itself.

P.S. – If you haven’t seen my book on learning hard things, check it out. It just might get you through your own frustration barrier!

The post The Hardest Part About Learning Hard Things appeared first on Scott H Young.


Some ideas are so powerful and useful that once you understand them deeply you start to see them everywhere. One of those ideas is exponential growth.

Exponential growth occurs in many systems, from animal populations to investment accounts. Our brains don’t seem well suited to reasoning exponentially. Instead, we tend to extrapolate in ways that consistently underestimate how big exponential growth eventually gets.

That being said, true, undamped exponential growth rarely lasts forever. A rabbit population that doubles every six months would, in theory, fill the entire planet under a layer of rabbits miles thick in no time at all.

Yet we’re not all swimming in rabbits, so exponential growth tends to have limits. Understanding those limits is at least as important as understanding the underlying growth itself.

What is Exponential Growth?

Mathematically, exponential growth occurs whenever the speed at which something increases is proportional to how much of it there is at any moment.

This proportionality of growth to amount is what leads to the famous result from calculus that the exponential function (with base e) is its own derivative: growth = quantity.
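Written out (a standard textbook formulation, not from the original post), the defining property and its solution are:

```latex
\frac{dN}{dt} = kN
\quad\Longrightarrow\quad
N(t) = N_0 e^{kt}
```

With k = 1 and base e, the growth rate at every moment equals the amount itself, which is why e^t is its own derivative.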

Qualitatively, exponential functions grow really fast.

In fact, if you take an exponential function out far enough, it eventually gets bigger than any finite polynomial. Polynomials are functions that take a quantity and square it, cube it, or raise it to any fixed power. Even f(x) = x^12345 will eventually be overtaken by any exponential function with a base greater than one.

This fact about exponentials means we often fail to predict just how large they will get. Even minuscule starting amounts and slow growth rates can grow to tremendous size if you give them enough time.
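You can watch the overtaking happen even with small numbers. A quick sketch (the specific functions here are chosen purely for illustration):

```python
# Find where an exponential (2**n) permanently overtakes a polynomial (n**3).
# The functions and cutoff are illustrative, not from the original post.
def crossover(limit=100):
    for n in range(1, limit):
        if all(2**m > m**3 for m in range(n, limit)):
            return n
    return None

print(crossover())  # prints 10: 2**10 = 1024 > 10**3 = 1000, and it stays ahead
```

The polynomial wins at first (2^9 = 512 < 9^3 = 729), which is exactly why exponentials are easy to underestimate early on.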

Where Do We See Exponential Growth?

True exponential growth occurs when the growth depends in some way on the amount you currently have.

Money in interest-earning accounts is a clear example. The amount of money you earn depends directly on the amount that is already in the account. That proportionality means you have exponential growth.
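A quick sketch of that proportionality in action (the 7% rate and doubling target are illustrative, not figures from the post):

```python
# Years of annual compounding needed for a balance to double.
# Growth each year is proportional to the current balance.
def years_to_double(rate=0.07):
    balance, years = 1.0, 0
    while balance < 2.0:
        balance *= 1 + rate
        years += 1
    return years

print(years_to_double())  # prints 11 (the "rule of 72" estimates 72/7, about 10.3)
```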

Populations of rabbits, where each pair of bunnies can make more babies, will also grow exponentially since the amount the population can grow depends on the amount that’s already there.

Exponential growth can also occur in other domains. Many businesses have exponential growth curves (at least for a while) because being bigger gives proportionally more opportunities to get even bigger.

My friend James Clear likes to argue that habits are the exponential growth of personal improvement. Get 1% better every day, and you quickly get a lot better overall. As with the start-ups or the rabbits, such growth clearly can’t go on forever, but there are probably places where consistent small improvements really do compound.
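Taking the 1%-a-day figure literally, as daily compounding, gives a striking number:

```python
# "1% better every day" treated as literal daily compounding over a year.
daily_gain = 1.01
after_a_year = daily_gain ** 365
print(round(after_a_year, 1))  # prints 37.8, i.e. roughly 38x better in a year
```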

Where Don’t You See Exponential Growth?

Not everything grows exponentially, so why not? As I’ve said in other lessons in this series of useful mental models, it’s at least as important to recognize when a pattern doesn’t match as when it does.

Exponential growth fails to apply when there is no longer a proportionality between the amount and the growth. For instance, fitness isn’t exponential growth because your improvement (in weight loss, pushups or running time) actually slows as you get better.

Similarly, you may experience exponential growth for a while, and then other constraints may drop you out of it. If you’re a new business owner, each new opportunity exposes your business to new people, which fuels more growth. However, at a certain point, you may not be able to effectively handle all the new opportunities. As some go to waste, you drop out of exponential growth because future growth brings fewer opportunities you actually act on.

The fact that sustained, perpetual exponential growth almost never occurs is itself an important feature of the pattern. All of our models of reality are approximations. Because exponential growth reaches such enormous heights, considerations that were unimportant when the total amount was small eventually become very important.

In weak gravity, Newton’s laws apply. In strong gravity, space and time warp and you need Einstein. Those corrections are minuscule when dealing with everyday things, but if the Earth had exponentially growing mass, you’d quickly have to start worrying about them.

A common pattern to look for, therefore, is one where exponential growth shifts into logarithmic or asymptotic growth. Logarithmic growth can continue forever, but it slows as the amount increases, the mirror image of exponential growth. Asymptotic growth gets closer and closer to a maximum value without ever reaching it.
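One textbook way to model this shift is logistic growth, which looks exponential at first and asymptotic near a carrying capacity. A sketch (the model and all numbers here are illustrative, not from the post):

```python
# Logistic growth: exponential early on, asymptotic near the carrying capacity K.
def logistic_steps(x0=1.0, r=0.5, K=1000.0, steps=60):
    x = x0
    history = [x]
    for _ in range(steps):
        x += r * x * (1 - x / K)  # growth is damped as x approaches K
        history.append(x)
    return history

pop = logistic_steps()
print(round(pop[-1]))  # settles at the carrying capacity, K = 1000
```

Early steps multiply the population by roughly 1.5; later steps barely move it, which is the "friction and waste" limit in miniature.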

Friction, side-effects, waste and loss all usually make exponential curves hit a limit.

Money tends to be an exception, but only insofar as few of us accumulate enough wealth for the exponential nature of compounding to break down. However, if you had a bank account accumulating interest for ten thousand years, it might become a substantial fraction of the economy, at which point it’s anyone’s guess whether you (or your descendants) could keep it from being broken up or stolen.

Applying Lessons of Exponential Growth

The first way to use this mental model is to learn to spot it. If you think a situation might have exponential growth, ask yourself, does an increase in the amount cause a proportional increase in the growth?

If it does, then you should consider that your naive extrapolations will likely underestimate how much growth you’re likely to experience. Exponential curves are hard to imagine, and so most people make linear extrapolations, even when those aren’t the best fit.

At the same time, be wary of assuming exponential curves will go on forever. They rarely do, either because they become so big that friction and waste start to slow them down, or because they’re only exponential across some limited range, switching back to slower growth curves or stopping entirely.

The post Useful Mental Model: Exponential Growth appeared first on Scott H Young.


Human beings are naturally creative. But sometimes we struggle with getting started. Whether you are writing a novel or learning a musical instrument, here are five methods to be more creative:

1. Output comes from input.

If you want to have a lot of good ideas, you need to expose yourself to good ideas. This means reading books, having conversations with interesting people, seeking out new experiences, travel and more.

There’s often a trade-off between creativity and efficiency. An efficiency-minded approach would seek out things that directly impact the area you want to improve. Read a business book to improve your business. Read a fitness book to get in shape.

A creativity-minded approach often benefits from searching far broader, and then making disparate connections. While it’s unlikely that any single idea you have will be totally original, the combination of a few different ideas often is. To give an illustrative analogy: everyone has seen all the cards in a deck of playing cards, yet shuffle those cards and the number of permutations is greater than the grains of sand on all the world’s beaches.

Breadth + connection = originality.

You can balance breadth and specificity by asking yourself whether it’s more important to have original ideas or useful ones. For useful, read direct. For original, read broad. 

2. Have a capture mechanism.

Creative ideas often come to you when you’re not deliberately trying to solve a problem. This is because when your attention is more relaxed, and your mind is wandering, it is often easier to associate more distant ideas than when you are using your attention to try to suppress distractions.

However, this can become a problem because often the moment of thinking of an idea is not the best time to work on an idea. Thus, your creative process must include a system to capture ideas when you have them, so you can work on them later.

The simplest mechanism is to keep a running list. I keep a list on my phone, creatively titled “Ideas,” where I jot down ideas for articles, business improvements, thoughts I want to follow up on, and more.

A more elaborate setup can also include specific subfolders for different types of ideas. You might have a folder for quotes, concepts, tactics, tools or whatever other ideas you want to encounter in your creative process.

3. Incubate your ideas.

Some ideas will appear in your mind, fully formed and ready to be implemented. Other ideas will appear as a fragment. You may have a piece of the puzzle but not be sure how it fits into a bigger picture.

I recommend regularly reviewing your ideas lists. I’ve had ideas sit on my list for months, if not years, before becoming articles or business strategies. Incubation helps because, just as a spontaneous connection can generate an idea, an incubated idea can spontaneously mature into a plan of action if you tend to it.

4. Have a pipeline for execution.

Ideas are worthless without implementation. Generating good ideas isn’t useful if you never act on them.

Pipelining is a method for working on more than one thing at a time by keeping items at various stages of the process in progress at once. In my own writing, I usually start with my ideas folder. Then some of those ideas (plus others I think of when I sit down to write) become article drafts. Those sit in a different folder, sometimes for a day, sometimes for months. When it comes time to finish them, I pull them out, edit them, draw images, and queue them up.

The different stages in the process, with time in between, allow space to think and edit. While I have written articles all in one go and hit publish, they often aren’t my best work.

5. Alternate between different creative “flows.”

Creative acts generally require two different mindsets. This is unfortunate because the two tend to work against each other, which often leads to lousy editing or writer’s block when one dominates the other.

The first is a generative flow where you let ideas come to you easily and you don’t look at them too critically. This mindset is expansive, relaxed, open and positive. The advantage of this flow is that you create a lot of ideas. The downside is that a lot of them are bad.

The second is a critical flow, where you edit and tear down the ideas you’ve made. You spot flaws. You fix weaknesses. You edit ruthlessly. This mindset is closed, focused, critical and precise. The advantage of this flow is that you can make your work a lot better. The downside is that it can often block you from thinking of new ideas because they get rejected too quickly.

Moods and flows can be influenced by environment, time of day, and conscious effort. Splitting the writing and editing phases of the pipeline often helps me: I can write exuberantly, then trim and cut down the excesses when I read the draft in a more sober light later.

The post 5 Tricks to Never Run Out of Ideas Again appeared first on Scott H Young.


Few skills scare people away like coding. Television portrayals make it seem like writing computer code is a genius-level activity, as weird symbols race across the screen and techno music blares in the background.

But the truth is that coding is actually pretty easy.

I’m not saying this to dismiss the work of brilliant programmers. A skill can be fairly easy to pick up the basics of while also being really difficult to master. Everyone learns to write; few people learn to write well. There’s no contradiction, therefore, in saying that basic literacy is an “easy” skill to acquire (in that the vast majority of us are able to do it) without dismissing the efforts of talented writers.

Nor am I saying this to mock people who are trying to learn programming and find it frustrating.

Rather, I say coding is easy because I believe that almost all people, even if they don’t see themselves as particularly smart, have the ability to learn to write simple programs. That most don’t is due more to structural barriers than to any intrinsic difficulty of the skill itself.

My Experience Coding

I’ve been writing programs for over half my life. I’ve taken a couple university classes. I even worked through the content of a CS degree online (which is totally unnecessary if you want to learn to code, by the way).

In some ways, my background may seem to disqualify me from making statements about the ease of coding. However, I can say, without a doubt, that subjects like engineering, accounting, physics and law are more difficult than programming. Most math you learn in high school is more difficult too, although you usually get waaay more practice with algebra than with code, which often leads to a misperception about which is harder.

Why Coding Feels Hard

Learning to code is hard for a couple reasons:

  1. Installing new languages is super frustrating. This is the first activity for a would-be programmer and, to this day, the thing I hate most about programming. This can create the misperception that programming is really hard because newbies extrapolate the difficulty of getting set up to how it will be every moment after.
  2. There are waay too many languages, tools, libraries and plug-ins. Starting programming is super overwhelming because there are a bajillion things to learn and you have no idea where to start.
  3. Early classes tend to be populated with people who have taught themselves programming before. Thus, you may think you’re not smart enough to program because of an unfair comparison. (I once knew a woman who got a master’s in civil engineering, which is much harder than intro coding, and she told me she wasn’t smart enough to code because of her first class. This is bananas, and yet people fall for it because some nerdy kid has like ten years of experience before the class starts.)

The first moments of programming are the hardest. Getting set up is annoyingly difficult and often requires learning a new way of working with computers even before you write a single line of code.

Consider the instructions for installing most languages. Open the terminal or command prompt. Type in a case-sensitive exact set of instructions to download and install the language. Use GitHub. Homebrew. Versions matter too. Are you running 32 or 64 bit? ‘Cause if you’re not sure it will crash with a cryptic error message and you’ll feel like an idiot.

These tools are learnable, like everything else, but they reinforce the impression given by television that coding is mostly using esoteric tools with weird, unfriendly user interfaces. When people see coding they imagine parsing the green-streaming letters of the Matrix, whereas the truth is a lot more like following a recipe along step-by-step.

How Do You Get Over the Initial Difficulties?

There are a few ways you can do this. My favorite is to just buy a book that tells you exactly how to set up a language, step by step. Most books from the bookstore will teach you the installation process, and if you’re meticulous about following them you’ll usually be successful.

Alternatively, you can dive into internet tutorials, but recognize that sometimes they are aimed at already-proficient programmers, who know what Homebrew and GitHub are, and are fluent with writing commands into Terminal. If you get one of these, you can try to follow it, but don’t feel bad if you screw it up. It’s frustrating and it doesn’t mean all coding will be like this.

Another option is to avoid setup entirely. Use one of the coding tutorial websites that teach you to code without any installation. I like this approach too, although sometimes you can’t build the thing you actually want to build with these apps. Still, if you hate the setup, that’s where I’d start.

Most important, however, is to remember that I told you this. When you try to learn to code, getting set up will be frustrating; just accept that this is a small price to pay. Soon it will get easier and you’ll do stuff that is cool. Don’t feel dumb if you get stuck here. I still do, and I’ve been writing code for years.

What Language/Tool Should You Start With?

This seems like a good question to ask, but, I’d argue, it’s actually the wrong way to think about learning programming.

First, despite the fact that programmers often boast about how many languages they know, recognize that most languages are only superficially different. Yes, I know all about language design, so don’t tell me about the importance of scripting versus compiling, or whether a language is strongly or weakly typed. Those things matter, but they’re details.

The basics of nearly all languages and tools are the same. Variables. Loops. Functions. Pointers. Stacks. Trees. Hashing. Recursion. These concepts exist in most languages. If you pick a mainstream language, you’ll learn them in mostly the same way, so it doesn’t matter much whether you pick Ruby, Python, C++, Java or PHP.
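To make that concrete, here is roughly what a few of those shared basics look like in Python; any mainstream language would read almost the same (the example itself is mine, not the post’s):

```python
# Variables, a loop, and a function: the shared core of most languages.
def count_vowels(text):
    count = 0                 # a variable holding state
    for ch in text.lower():   # a loop over the input
        if ch in "aeiou":
            count += 1
    return count              # a function returning a value

print(count_vowels("Hello, world"))  # prints 3
```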

Where languages do matter is in what you want to use them for. Want to write iPhone apps? Swift or Objective-C are the languages of choice. Need to create web pages? JavaScript is going to come in handy. Yes, you can use almost any language for almost any task if you tinker with it and get the right plug-ins. However, some languages make it easier to get started on certain types of projects than others.

Therefore, the first question to ask is not: which language should I learn, but, which project should I start with?

What Should Be Your First Project?

I recommend starting your programming adventure, even before you write a single line of code, with a decision about a concrete programming project you’d like to create.

This serves a couple purposes:

  1. It narrows down the language/tool choices considerably. Once you know you’re building a website, you’re already leaning towards tools that were designed with that goal in mind.
  2. Everything you learn is connected to a destination. As I document extensively in my book, transfer of learning is notoriously difficult. Learning directly works better than learning something and just hoping it will help you later.
  3. You can work on something you think is cool. If you think it would be cool to make an interactive website, do that. If you’d prefer a game, do that. If you’d prefer to automate your accounting work so you’re done in half the time as your colleagues, do that instead. Do what you feel is cool and you’ll be motivated to stick to it.

In general, smaller is better when it comes to projects. Deciding to start with making the next Google is ambitious, but probably will get you stuck in the weeds before you make much progress.

If your true ambitions are huge, it’s often best to work on a toy project first. Many experienced programmers still do this when they are entering a new territory of programming. Toy projects take the essential ideas of programming but turn them into something you can do in a few days or weeks, instead of years.

Examples of good projects include:

  • A basic text-adventure game. No graphics, but it still requires learning concepts like loops, variables and input-output processing.
  • A simple website. Start with just displaying a static page. Maybe add in comments, users, photos or interactive elements as you learn more.
  • A simple app. What’s something dead-simple you’d like to have on your phone? It can be stupid to start, don’t worry.
  • A script for automating a tedious task.
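As a taste of the first suggestion, a text adventure can start as small as a dictionary of rooms and a movement function (a minimal sketch; the rooms and layout are made up):

```python
# A minimal text-adventure skeleton: rooms, state, and a command handler.
# Room names and layout are invented for illustration.
ROOMS = {
    "hall": {"north": "library", "south": "kitchen"},
    "library": {"south": "hall"},
    "kitchen": {"north": "hall"},
}

def move(room, direction):
    """Return the new room, or stay put if you can't go that way."""
    return ROOMS[room].get(direction, room)

room = "hall"
for command in ["north", "south", "south"]:  # scripted input for the demo
    room = move(room, command)
print(room)  # prints "kitchen"
```

Swap the scripted command list for an `input()` loop and you have something playable, and a natural place to keep adding loops, conditionals and data structures as you learn them.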

In general, programs are easier when they involve no multimedia content (websites are a bit of an exception), so if you’re making a game, for instance, a text game tends to be easier than one with graphics, if only because making all the graphics can take time.

Similarly, different core activities have different difficulties. Arithmetic and simple calculations done repeatedly are easiest. Processing text for exact patterns is a bit harder. Processing text for vague patterns is harder still. Processing speech, photos and video is even harder.

If you do end up picking an initial project that turns out to be super difficult, it’s okay to scale back. “Make an application that calculates my mortgage payments” is much, much easier than “Make an application that can tell you what someone’s hairstyle is from a photo.” It’s often not obvious that this is the case in the beginning, so don’t worry if you accidentally pick a “hard” problem to start with, you can adjust it later to something easier.

Which Resources Should You Use?

Once you’ve picked a project, the next step is to get some resources to help. This is a step many people worry over endlessly, but like the language choice, it’s a lot less important (and depends more on your goals) than you think.

I won’t list specific resources, because there are so many good ones that my suggestions would leave out some of the best. Instead, here are three strategies for finding good resources:

  1. Get a book that teaches the language + project you want. There should be a computer section in your local bookstore or library; you can pick almost any of those books. I like O’Reilly, but there are lots of good ones. (If you haven’t figured out which language yet, just Google your type of project and look for suggestions.)
  2. Attend a MOOC. Coursera, edX, MIT, Harvard, Stanford and others all teach computer programming online. Once again, the thing that matters most isn’t the exact class, but whether it teaches the language/project domain you care about.
  3. Take a tutorial program/website. Again, there are tons of these. I used Google’s for Python the first time I wanted to learn Python.

Stressing over which book or course to pick is the wrong thing to worry about. The main thing is to use the book or course to learn enough to start tinkering on your project, not to master programming on its own. Starting your project before you feel ready is exactly the right approach.

Do What Real Programmers Do and Ask Google

Once you actually get started writing code, you’re going to encounter many, many situations where you either don’t know how to do something, or you learned how to do it before and you’ve forgotten.

In these situations, you should do what real programmers do: ask Google. This isn’t a shameful activity, but a part of real programming. I’ve been coding for years, but I always forget silly syntactical things and so I find myself googling regular expressions over and over again. It’s not bad, it’s just part of the process.
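To stick with that regex example, here is the sort of thing I mean in Python (the pattern and text are illustrative):

```python
import re

# Pull the four-digit years out of a sentence. Regex syntax is famously
# forgettable; looking it up repeatedly is normal, not a failure.
text = "The site launched in 2006 and the book came out in 2019."
years = re.findall(r"\b\d{4}\b", text)
print(years)  # prints ['2006', '2019']
```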

Once you’ve gotten everything installed, learned enough basics from your book, and started working on your first project, you’ll learn the rest by googling and adding to your library of programming knowledge. Computer science theory, a detailed understanding of your language, and advanced design patterns can wait until you’ve finished a few real projects and feel like you can code something.

Caveat #1: Don’t Copy-and-Paste

A first piece of advice when it comes to this step is to never copy-and-paste. Copy-and-pasting is bad because you don’t try to understand the code you’re copying. If you have to transcribe, in contrast, you naturally ask yourself, “why this? why not something else?” Even if you don’t have a great answer immediately, typing things for yourself will open your mind up to the answer whereas copy-and-pasting shuts down thinking.

Caveat #2: Try it Yourself Before Looking for a Solution

A second piece of advice is to always try to solve something yourself before looking up how someone else does it. Most problems have many, many ways they can be solved. The challenge is that expert programmers often know a particularly concise, clever way, but that often isn’t the “obvious” way. This can lead to a trap where you see a clever solution that employs tricky syntax, think that there was no way you would have guessed that solution, and believe you couldn’t have solved it on your own. That’s usually not true, yet it’s an unfortunate side effect of looking up solutions before trying to find your own.

Should You Bother Learning Computer Science?

Computer science tends to be equated with programming, but in practice the two tend to be rather different. Comp sci tends to be a lot more discrete math, complicated algorithms and fundamental issues of computation. Programming is a lot more practical and hands on.

A lot of computer science professors, for instance, aren’t great programmers. This may sound like a professional failure, unless you realize that their job is mostly to prove things with a pencil and paper using math, rather than write usable code.

I did a whole project to learn computer science, so I happen to really like it. However, if your goal is mostly practical, I would not spend much time with it in the beginning. If you have done programming for a little while, and finished some real projects, then diving into advanced algorithms, theories of computation and discrete math topics can be really fascinating. But if you start with these, it can be easy to get overwhelmed.

Summary of Advice

This has been a longer post, so let me reiterate the final points before I go:

  1. Learning to code is much easier than most non-coders think, at least to make simple stuff.
  2. The main reason it seems hard is because:
    1. Setting up is frustrating.
    2. There are too many options to start.
    3. Early classes are populated with self-taught whiz-kids who make you feel dumb.
  3. To teach yourself you should:
    1. Decide what kind of thing you want to make (website, app, game, script, etc.) first, then pick the language that fits best with what you want to make.
    2. Start immediately with a concrete project to build something small. Books, courses and tutorials should help you get to that point.
    3. When you get stuck, do what everyone else does and ask Google.
      1. But don’t copy-and-paste the answers.
      2. And try to solve it for yourself first.
  4. Take computer science classes only after making some things on your own.

If you can get through the initial frustration and build a little confidence, anyone can learn to code. What’s more, it’s a skill you can use throughout your life, even if you never become a full-time programmer.

The post Learning to Code is Easy: Here’s How to Teach Yourself appeared first on Scott H Young.
