Getting Things Done & Subjective Freedom

Written on June 15, 2015.

I read David Allen’s book “Getting Things Done” several years ago and I’ve been using a simplified version of his system ever since. It feels efficient and I think it’s probably necessary to have a system like it to be highly productive.

The book doesn’t discuss the psychological implications of productivity optimization, however. At one point I took a step back and noticed that for the prior six months I had felt busy nearly every day. At the beginning of that period I had moved to a new city, so I thought it made sense that I would have a lot to do to settle in. But after six months, that excuse didn’t make sense anymore. In fact, not only did I feel busy, but I felt almost like a robot that was just executing the orders on the task list whenever I had any available time. No matter how hard I tried, I couldn’t seem to get the list of important action items below 8-10 items, and that just felt like too much. It actually discouraged me from doing other activities because I felt that I would prefer to get the tasks out of the way first.

One thing I noticed was that I was putting tasks on the list that I had never needed written down before, like getting groceries, doing laundry, cleaning, and buying supplies. Sometimes these would linger on the task list for a while because I didn’t have the energy or it wasn’t convenient. All of these were important things to do, but the act of planning them actually made me feel busier.

The lesson I learned is that planning something in advance actually feels quite similar in many cases to having an obligation, even if you yourself made the plan. So for example, if you make plans to meet with your friends every day, you might still enjoy seeing them, but you will likely feel quite “busy” because you don’t get the feeling of subjective freedom that you would get if you had done the exact same thing spontaneously.

Unfortunately, most people are objectively fairly busy, which means it might be difficult to have a social life that doesn’t involve making advance plans. But we can try to be more spontaneous with our personal tasks.

I think everyone acknowledges that spontaneity is desirable, but the important observation for me is that spontaneity is the primitive mind’s definition of freedom. We may think that following our own plans is freedom, but the more primitive parts of our minds may not see it the way our conscious minds do.


Read more from the Efficiency category.

Guns, Germs, and Steel

Written on April 18, 2015.

This is a truly amazing book about why some societies are more successful than others at developing technology and conquering the world. The author explains how geographical and environmental factors largely determined these outcomes, which implies that genetic and cultural factors are much less significant. The most important factor in the early stages of civilization was access to domesticable plants and animals. The theory is very convincing and fascinating.

Read more from the Anthropology category.

Clean Code

Written on April 12, 2015.

This book provides a good overview of best practices for coding. I liked that it took concrete opinionated stances like:

1. “The only way to make the deadline—the only way to go fast—is to keep the code as clean as possible at all times.” (p6)

2. “Indeed, the ratio of time spent reading vs. writing is well over 10:1. We are constantly reading old code as part of the effort to write new code. … So if you want to go fast, if you want to get done quickly, if you want your code to be easy to write, make it easy to read.” (p14)

3. “Functions should not be 100 lines long. Functions should hardly ever be 20 lines long.” (p34)

4. “In order to make sure that our functions are doing “one thing,” we need to make sure that the statements within our function are all at the same level of abstraction.” (p36)

5. “Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification—and then shouldn’t be used anyway.” (p40) I think this rule is far too strict, though I agree that new data structures should be created where appropriate to reduce the number of arguments.

6. “Flag arguments are ugly. Passing a boolean into a function is a truly terrible practice.” (p41) I think there are some cases where boolean arguments are acceptable, especially if they are expressed as a keyword argument.

7. “Command Query Separation: Functions should either do something or answer something, but not both.” (p45)

8. “Try/catch blocks are ugly in their own right. They confuse the structure of the code and mix error processing with normal processing. So it is better to extract the bodies of try and catch blocks out into functions of their own.” (p46)

9. “Duplication may be the root of all evil in software.” (p48)

10. “So if you keep your functions small, then the occasional multiple return, break, or continue statement does no harm and can sometimes even be more expressive than the single-entry, single-exit rule.” (p49)

11. “The proper use of comments is to compensate for our failure to express ourself in code. … though comments are sometimes necessary, we will expend significant energy to minimize them.” (p54)

12. “Variables should be declared as close to their usage as possible.” (p80)

13. “I have found, however, that this [horizontal] kind of alignment is not useful. The alignment seems to emphasize the wrong things and leads my eye away from the true intent.” (p87)

14. “Test code is just as important as production code. It is not a second-class citizen. It requires thought, design, and care. It must be kept as clean as production code.” (p124)

15. “It is not enough for code to work. Code that works is often badly broken. Programmers who satisfy themselves with merely working code are behaving unprofessionally.” (p250)

16. “Using floating point numbers to represent currency is almost criminal.” (p301)

It also covers many important concepts like: “G34: Functions Should Descend Only One Level of Abstraction”, The Law of Demeter, Data/Object anti-symmetry, the DRY principle, the importance of naming, and more.

 

Read more from the Computers and Technology category.

Dataclysm

Written on March 8, 2015.

Although this book wasn’t particularly entertaining, it did have quite a few impressive data-backed observations about human psychology that I had never seen before. It had a lot more content than the couple of blog posts I had seen from the author, so it was worth the read.

Read more from the Psychology category.

Deep Value

Written on February 28, 2015.

The thesis of this book is that the worst-quality stocks have the best expected returns, as a portfolio. Even though these low-quality stocks individually have a much higher probability of going bankrupt, they also have a much higher probability of shooting up in price. So it sounds like a classic high-volatility situation where higher risk earns higher returns. But the author’s analysis indicates that as a portfolio, the lowest-quality stocks actually have lower volatility on long time scales of several years. He recommends the acquirer’s multiple as a way to filter for value stocks. The book is also full of stories about activist investors, which isn’t very practical for most investors, but makes the book more enjoyable.

Read more from the Economics and Finance category.

The Last Firewall

Written on January 4, 2015.

Although I really liked the author’s Avogadro Corp, and this book had even better reviews, I was bored by it overall. It is basically a Sci-Fi action book, and I personally don’t like action books. The plot revolves around AI and a population with neural implants that interface with the internet. AI generally behaves well and seems to be under control, but one AI has evil plans. A young girl with a uniquely powerful implant fights back. Most of the story is about the girl on the run from the law, confused by everything that is going on, and getting into dangerous gunfights. For me, the Sci-Fi aspect just wasn’t compelling enough, and it mostly felt like an action novel with a little Sci-Fi slant.

Read more from the Science Fiction category.

Using Namecoin for Identity

Written on December 28, 2014.

On the internet, it can be a challenge to know whether you are connecting with who you think you are and not some impostor. Many websites don’t care who they are talking to in particular, as long as one person can’t masquerade as another; they ask each user to choose a password, and knowledge of that password is sufficient to establish identity for the purposes of the site. But sometimes you need to make sure you are communicating with the right person.

The “right” person is the person you want to connect with, whom you must know through some other connection. For example, you might know them by name, their face, and their city of residence. Or maybe you only met them through Twitter and you don’t know anything else about them besides their Twitter username. Whatever the nature of the connection you have with this person, it provides a type of identification (or a combination of types). The challenge is essentially to securely link all the various types of identification a person has, so that others can reach them through channels other than the one where they originally connected.

For example, I may have received several helpful pull requests from a user on Github over a period of time and I want to send them some Bitcoin to thank them. I could send them a message and ask for their Bitcoin address, but this solution isn’t perfect. First of all, it requires additional effort for both parties, but also from a security standpoint, there is a risk that someone working at Github could swap the Bitcoin address for one that they own and I might end up sending the bitcoins to the wrong person. That might sound a bit paranoid, but this is a very general problem and isn’t limited to such casual interactions. It would be nice to have a highly secure solution.

Now I don’t care who exactly this Github user is. It doesn’t matter where they live or what their real name is; all that matters is that my bitcoin payment goes to the same person who gave me the pull requests. Namecoin provides a great solution to this.

Suppose that the Github user has a Namecoin ID with all of their identities listed in the record. The record contains a link to their Github account, their Bitcoin address, their email address, their PGP public key, their Twitter username, their Facebook username, a link to their homepage, etc. And also suppose that they put their Namecoin ID in their profile on all of these sites.

Now it’s easy for me to look up their Namecoin ID in the Namecoin blockchain and find their Bitcoin address. An attacker at Github could still substitute the Namecoin ID in the user’s profile and create a fake Namecoin record with a different Bitcoin address, though. To mitigate this attack, I could use special Namecoin identity software that scans all Namecoin records for duplicate links. If it detects two records that both link to the same Github username, I can postpone the payment until I figure out which one is correct. If the user had not yet made a Namecoin account and the Github employee attacker made it look like they had, then I could still be tricked, but that attack becomes obsolete if everyone uses the system.

Even in this early adoption phase where spoofing is possible, it would be much more difficult because an attacker would have to spoof multiple channels at the same time (an identity would not be considered secure if it only had one or two links in the Namecoin record). For example, let’s say the Github employee attacker changed the user’s profile to show a fake Namecoin ID and made the corresponding Namecoin record with a link back to the user’s profile. In addition, the record shows a Twitter username. If it is the user’s real Twitter account, then it won’t contain a backlink to this Namecoin ID, so it won’t be considered valid by the identity software, which checks backlinks automatically. If they used a fake Twitter account, then the identity software might detect that the account was created recently, has very little activity, looks suspicious, or has been flagged by other users. Even if the fake Twitter account passed the automatic filters, I could check it myself to be careful and potentially notice that it doesn’t correspond to the user I was looking for. Again, this attack would be automatically thwarted if the user were already using this system.
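The backlink check described above is simple to sketch. This is my own illustration, not real Namecoin tooling: the record is modeled as a plain dict, and `fetch_profile_links` is a hypothetical function that, in a real system, would scrape or query each site’s API for Namecoin IDs advertised on the profile.

```python
def verify_identity(namecoin_id, record, fetch_profile_links):
    """Return True only if every account listed in the Namecoin record
    links back to the same Namecoin ID (a bidirectional link)."""
    for channel, account in record.items():
        backlinks = fetch_profile_links(channel, account)
        if namecoin_id not in backlinks:
            return False  # missing backlink: treat the identity as unverified
    return True

# Toy data standing in for real profile pages.
profiles = {
    ("github", "alice"): {"id/alice"},
    ("twitter", "alice_tw"): {"id/alice"},
}

def lookup(channel, account):
    return profiles.get((channel, account), set())

ok = verify_identity("id/alice",
                     {"github": "alice", "twitter": "alice_tw"}, lookup)
# A spoofed record pointing at a profile with no backlink fails the check.
spoofed = verify_identity("id/mallory", {"github": "alice"}, lookup)
```

The point of requiring backlinks on every channel is that an attacker who controls one site still fails the check on all the others.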

This is also useful for things like PGP key exchange because typically people have some long-lasting online relationship with their acquaintances, such as through Facebook and email, which would be very hard for an attacker to fake. 

Read more from the Technology category.

Fermi’s Paradox

Written on December 28, 2014.

Wikipedia describes Fermi’s Paradox as follows:

“The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.”

Scientists estimate that out of the roughly 200 billion star systems in our galaxy, there are roughly 40 billion potentially habitable planets in the “goldilocks zone”, 8.8 billion of them around sun-like stars and the rest around red dwarfs [1]. This figure doesn’t even include all the smaller moons that could potentially support life. Even if the probability of life forming on any given planet is low, that is a lot of chances. It’s hard to guess how many of those planets have life and how many have intelligent life, but it would clearly be unreasonable to confidently assume that extraterrestrial life does not exist in our galaxy. And given that there are at least a hundred billion galaxies in the observable universe, and many more beyond it, it seems quite likely that there are lots of worlds with intelligent life.
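To see why “a lot of chances” matters, here is a back-of-the-envelope expected-value calculation; the per-planet probability of intelligent life is a pure assumption for illustration, since nobody knows its real value:

```python
habitable_planets_per_galaxy = 40e9     # Milky Way estimate cited above [1]
galaxies_in_observable_universe = 1e11  # "at least a hundred billion"

# Illustrative assumption only: one-in-a-billion odds of intelligent
# life arising on any given habitable planet.
p_intelligent_life = 1e-9

expected_per_galaxy = habitable_planets_per_galaxy * p_intelligent_life
expected_in_universe = expected_per_galaxy * galaxies_in_observable_universe

# Even with such pessimistic odds, that is roughly 40 civilizations per
# galaxy and trillions across the observable universe.
```

The conclusion is robust to the exact probability: any odds much better than about one in forty billion imply at least one other civilization per galaxy on average.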

So why do we have no evidence of extraterrestrial life? Life outside our galaxy would be too hard to detect, but if there were life in our galaxy, it might be detectable. For example, it would only take 5-50 million years to colonize the galaxy using “slow” technologies that we already have access to. If an advanced civilization existed over 50 million years ago, it could have colonized every planet in the galaxy by now, including earth. But it’s not obvious that it would do that. In fact, it seems more likely that it would not colonize the galaxy because expansion is costly and unnecessary. Humans expanded to cover most of the earth, but only because the population was growing and expanding provided easier access to resources. An advanced civilization might realize that population control is preferable to expansion, because expansion out of the solar system would mean communication latency on the order of years, effectively disconnecting the colonists from the internet of their home world.

If they were sufficiently advanced, it would probably be better to build more planets in their home system to keep the communication latency low. After consuming all the matter in their home system, they would then have to either stop growing, mine other systems for matter/energy, or expand. Again, the first two seem like better options than expansion. If they do start mining, it would certainly slow their growth because they still have to obey the laws of physics. But eventually they would reach a point where the energy cost of mining is greater than the total energy in the matter they are mining, discounted for the time it takes. Getting $2 in exchange for $1 is only a good deal if you get the $2 in the near future; if it takes 20 years then you could have done better with other investments. At this point, mining would stop if they are acting rationally, so they might stagnate from a growth perspective, while still improving their home system internally. Perhaps this would result in a Dyson sphere or Matrioshka brain where the civilization could live in a virtual reality, making physical expansion unnecessary.
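The “$2 in exchange for $1” argument is just present-value discounting. A sketch with assumed numbers (the 7% discount rate is arbitrary, chosen only to make the 20-year case come out below cost):

```python
def present_value(future_amount, annual_rate, years):
    """Discount a payoff received `years` from now back to today."""
    return future_amount / (1 + annual_rate) ** years

# Spending $1 now to get $2 soon is a good deal...
soon = present_value(2.0, 0.07, 1)    # ~1.87, well above the $1 cost

# ...but the same $2 twenty years out is worth less than the $1 spent,
# which is the point at which a rational civilization stops mining.
later = present_value(2.0, 0.07, 20)  # ~0.52
```

For interstellar mining the relevant delay is set by travel time, so the payoff horizon grows with distance and the economics get worse the farther out you go.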

Expansion of humans over the surface of the earth is not a good analogy for expansion of advanced civilizations across the galaxy because advanced civilizations will likely care more about communications and be able to utilize space more effectively with computers.

But if a highly advanced civilization existed, wouldn’t it want to at least send out probes to explore the galaxy? It seems prudent to know what other life exists to make sure that it doesn’t come to destroy you. It would also make sense to make the probes stealthy because if one gets captured it would take a very long time to replace it. There could be a tiny probe in our solar system monitoring radio broadcasts and forwarding them to its home system using perfectly targeted beams like a laser. The probe itself wouldn’t make contact with us or it would risk being captured.

But once the extraterrestrials got the message, they might make contact with us. But even if they did, it might be a while before we hear from them. The earth has only been broadcasting radio waves for about 100 years. And those radio waves rapidly lose their strength the further they travel. The Arecibo radio telescope would only be able to detect such radio broadcasts if they were within 0.3 light years and the nearest star is 4.2 light years away [1]. Beyond that, the signal would get lost in the noise. However, if there was already a probe here, then it could forward the signal directionally to avoid this issue. Assuming that the probe was present, there would have been enough time for a round-trip to a star up to 50 light years away. But there are only about 2000 stars within 50 light years [2], which is just 0.000001% of the stars in the galaxy. It is quite likely that any extraterrestrial civilizations haven’t heard about us yet because we are so new as a technological civilization.
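The round-trip arithmetic in this paragraph is worth making explicit; the only inputs are our roughly 100-year broadcast history and the star counts cited above:

```python
years_broadcasting = 100  # rough age of earth's radio emissions

# A reply can only have arrived from stars close enough for a round trip.
max_reply_distance_ly = years_broadcasting / 2   # 50 light years

stars_within_50_ly = 2000   # [2]
stars_in_galaxy = 200e9     # cited above

fraction_reached = stars_within_50_ly / stars_in_galaxy
# fraction_reached is 1e-8, i.e. 0.000001% of the galaxy's stars.
```

In other words, even a listening civilization has almost certainly not had time to both hear us and answer.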

And for the same reasons as above, we will probably have a hard time detecting any signals that they create unless they are aimed directly at us. Even if they were trying to make contact (and it’s not clear that they would want to), it would seem like a big waste of energy to beam every star system in the galaxy too frequently. We certainly aren’t doing it: Earth has only sent a few interstellar messages, and they were short in duration [3]. Assuming a message did get beamed directly to earth, there is a very high probability that we would miss it [4].

Overall, it’s not too surprising that we don’t yet have evidence of extraterrestrial intelligence; the vast scale of the galaxy, the finite speed of light, and the lack of incentives for contact all make it difficult.

[1] http://www.nbcnews.com/science/space/8-8-billion-habitable-earth-size-planets-exist-milky-way-f8C11529186

[2] http://www.atlasoftheuniverse.com/50lys.html

[3] http://en.wikipedia.org/wiki/List_of_interstellar_radio_messages

[4] http://www.setileague.org/askdr/howmuch.htm

Read more from the Philosophy category.

Zero to One

Written on December 27, 2014.

This book has really good reviews, but I think that is mostly due to “success bias”: people have a tendency to overrate anything associated with successful people. There were very few original ideas in this book, if any. The main thesis is that we live in a time of technological stagnation and that entrepreneurs should look for businesses that do something new instead of iterating on existing business models.

The technological stagnation assertion seems a bit absurd. He says that we haven’t seen much progress outside of computers and communications since 1970. The progress in computers and communications has been massive, and these are huge categories, so he’s basically saying “if you ignore all the amazing progress that we’ve made, we haven’t made much progress.” But even then, there has been a lot of progress outside of those two categories.

And the idea that businesses should focus on totally new ideas is vague and mostly meaningless. Google wasn’t the first search engine, Facebook wasn’t the first social network, and Amazon wasn’t the first online bookstore, much less the first bookstore. Most successful tech startups are better described as iterating on existing technologies than as doing something completely new. It’s hard to even think of examples of things that are completely new, because everything builds on something else in some way.

He has some good points, like that the best businesses are monopolies (but that is just repeating Warren Buffett) and that products don’t sell themselves (but that is more thoroughly explained by Crossing the Chasm). He also makes wild claims without any justification, like that computers don’t compete with humans and that AI won’t be a problem until next century. Also, I wasn’t too impressed with his character after reading that he made a blanket rule to never invest in people who wore suits to pitches. I would have hoped that the individuals with the power to create or destroy businesses would have a slightly more rational and discerning decision process.

Read more from the Business and Marketing category.

The 22 Immutable Laws of Marketing

Written on July 20, 2014.

An overview of the basic concepts of marketing. The laws are simple, but they aren’t always obvious. The book emphasizes that successful marketing is more about appealing to the customer’s psychology than to their rational decision-making process.

Read more from the Business and Marketing category.
