The Last Firewall

Written on January 4, 2015.

Although I really liked the author’s Avogadro Corp, and this book had even better reviews, I was bored with it overall. It is basically a Sci-Fi action book, and I personally don’t like action books. The plot revolves around AI and a population that has neural implants to interface with the internet. AI generally behaves well and seems to be under control, but one AI has evil plans. A young girl with a uniquely powerful implant fights back. Most of the story follows the girl on the run from the law, confused by everything that is going on, and getting into dangerous gunfights. For me, the Sci-Fi aspect just wasn’t compelling enough; it mostly felt like an action novel with a slight Sci-Fi slant.

★★★★★


Using Namecoin for Identity

Written on December 28, 2014.

On the internet, it can be a challenge to know whether you are connecting with whom you think you are and not some impostor. Many websites don’t care who they are talking to in particular, as long as one person can’t masquerade as another, so they ask each user to choose a password, and knowledge of that password is sufficient to establish identity for the purposes of the site. But sometimes you need to make sure you are communicating with the right person.

The “right” person is the person you want to connect with, whom you must know through some other connection. For example, you might know them by name, by their face, and by their city of residence. Or maybe you only met them through Twitter and know nothing else about them besides their Twitter username. Whatever the nature of your connection with this person, it provides a type of identification (or a combination of types). The challenge is essentially to securely link all the various types of identification a person has, so that others can reach them through channels other than the one where the connection was originally established.

For example, I may have received several helpful pull requests from a user on Github over a period of time, and I want to send them some Bitcoin to thank them. I could send them a message and ask for their Bitcoin address, but this solution isn’t perfect. First, it requires additional effort from both parties. More importantly, from a security standpoint, there is a risk that someone working at Github could swap the Bitcoin address for one they own, and I would end up sending the bitcoins to the wrong person. That might sound a bit paranoid, but this is a very general problem and isn’t limited to such casual interactions. It would be nice to have a highly secure solution.

Now I don’t care who exactly this Github user is. It doesn’t matter where they live or what their real name is; all that matters is that my bitcoin payment goes to the same person who gave me the pull requests. Namecoin provides a great solution to this.

Suppose that the Github user has a Namecoin ID with all of their identities listed in the record. The record contains a link to their Github account, their Bitcoin address, their email address, their PGP public key, their Twitter username, their Facebook username, a link to their homepage, etc. And also suppose that they put their Namecoin ID in their profile on all of these sites.
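
As a concrete illustration, such a record might look like the JSON below. This is purely a hypothetical sketch: the field names are illustrative, not an official Namecoin specification, and all of the values are made up.

```json
{
  "github": "example-user",
  "bitcoin": "1ExampleAddressDoNotUseXXXXXXXXXX",
  "email": "user@example.com",
  "pgp": "0123456789ABCDEF0123456789ABCDEF01234567",
  "twitter": "example_user",
  "facebook": "example.user",
  "website": "https://example.com"
}
```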

Now it’s easy for me to look up their Namecoin ID in the Namecoin blockchain and find their Bitcoin address. An attacker at Github could still substitute the Namecoin ID in the user’s profile and create a fake Namecoin record with a different Bitcoin address, though. To mitigate this attack, I could use special Namecoin identity software that scans all Namecoin records for duplicate links. If it detects two records that both link to the same Github username, I can postpone the payment until I figure out which one is correct. If the user had not yet made a Namecoin account and the Github employee attacker made it look like they had, then I could still be tricked, but that attack becomes obsolete once everyone uses the system.
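
Here is a minimal sketch of what that duplicate-link scan could look like, assuming the records have already been fetched from the blockchain and parsed (the field names are the illustrative ones from the hypothetical record above):

```python
from collections import defaultdict

def find_duplicate_links(records):
    """Group Namecoin IDs by the external identities they claim.

    `records` maps a Namecoin ID (e.g. "id/alice") to its parsed JSON
    record. Returns only the claims made by more than one record,
    which indicates a possible spoofing attempt.
    """
    claims = defaultdict(list)  # (channel, value) -> [namecoin_id, ...]
    for namecoin_id, record in records.items():
        for channel in ("github", "twitter", "facebook", "email"):
            value = record.get(channel)
            if value:
                claims[(channel, value.lower())].append(namecoin_id)
    return {claim: ids for claim, ids in claims.items() if len(ids) > 1}

# Two records claiming the same Github account should trigger a warning:
records = {
    "id/alice":   {"github": "example-user", "bitcoin": "1AliceXXX"},
    "id/mallory": {"github": "example-user", "bitcoin": "1MalloryXXX"},
}
print(find_duplicate_links(records))
# {('github', 'example-user'): ['id/alice', 'id/mallory']}
```

Any claim that maps to more than one Namecoin ID would put the payment on hold until the conflict is resolved.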

Even in this early adoption phase where spoofing is possible, it would be much more difficult because an attacker would have to spoof multiple channels at the same time (an identity would not be considered secure if it only had one or two links in the Namecoin record). For example, let’s say the Github employee attacker changed the user’s profile to show a fake Namecoin ID and created the corresponding Namecoin record with a link back to the user’s profile. In addition, the record shows a Twitter username. If it is the user’s real Twitter account, then it won’t contain a backlink to this Namecoin ID, so it won’t be considered valid by the identity software, which checks backlinks automatically. If the attacker uses a fake Twitter account, then the identity software might detect that the account was created recently, has very little activity, looks suspicious, or has been flagged by other users. Even if the fake Twitter account passes the automatic filters, I could check it myself to be careful and potentially notice that it doesn’t correspond to the user I was looking for. Again, this attack would be automatically thwarted if the user were already using this system.
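
The backlink check might look something like the sketch below. `fetch_profile_text` is a hypothetical helper standing in for whatever scraping or API calls a real implementation would use for each site:

```python
def verify_backlinks(namecoin_id, record, fetch_profile_text):
    """Check that each profile linked in the record links back.

    `fetch_profile_text(channel, username)` is assumed to return the
    public profile/bio text for that account; a real implementation
    would scrape each site or use its API.
    """
    results = {}
    for channel in ("github", "twitter", "facebook"):
        username = record.get(channel)
        if username:
            text = fetch_profile_text(channel, username)
            results[channel] = namecoin_id in text
    return results

# Demo with canned profile text: the Github bio backlinks, Twitter doesn't.
profiles = {
    ("github", "example-user"): "Developer. Namecoin: id/example",
    ("twitter", "example_user"): "Just another account",
}
record = {"github": "example-user", "twitter": "example_user"}
fetch = lambda channel, username: profiles.get((channel, username), "")
print(verify_backlinks("id/example", record, fetch))
# {'github': True, 'twitter': False} -> record rejected
```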

This approach is also useful for things like PGP key exchange, because people typically have some long-lasting online relationship with their acquaintances, such as through Facebook and email, which would be very hard for an attacker to fake.


Fermi’s Paradox

Written on December 28, 2014.

Wikipedia describes Fermi’s Paradox as follows:

“The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.”

Scientists estimate that out of the roughly 200 billion star systems in our galaxy, there are roughly 40 billion potentially habitable planets in the “goldilocks zone,” 8.8 billion of them around sun-like stars and the rest around red dwarfs [1]. This figure doesn’t even include all the smaller moons that could potentially support life. Even if the probability of life forming on any given planet is low, that is a lot of chances. It’s hard to guess how many of those planets have life and how many have intelligent life, but it would clearly be unreasonable to confidently assume that extraterrestrial life does not exist in our galaxy. And given that there are at least a hundred billion galaxies in the observable universe, and many more beyond it, it seems quite likely that there are lots of worlds with intelligent life.
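
To make “a lot of chances” concrete, here is a toy calculation; the per-planet probability is a made-up assumption, chosen only to show how a tiny probability combines with tens of billions of trials:

```python
import math

n_planets = 40e9   # potentially habitable planets in the galaxy [1]
p_life = 1e-9      # assumed chance of life arising on a given planet

expected = n_planets * p_life
p_at_least_one = 1 - math.exp(n_planets * math.log1p(-p_life))

print(f"expected life-bearing planets: {expected:.0f}")   # ~40
print(f"P(at least one): {p_at_least_one:.15f}")          # ~1 - e^-40, i.e. ~1
```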

So why do we have no evidence of extraterrestrial life? Life outside our galaxy would be too hard to detect, but if there were life in our galaxy, it might be detectable. For example, it would take only 5-50 million years to colonize the galaxy using “slow” technologies that we already have access to. If an advanced civilization existed over 50 million years ago, it could have colonized every planet in the galaxy by now, including Earth. But it’s not obvious that it would do that. In fact, it seems more likely that it would not colonize the galaxy, because expansion is costly and unnecessary. Humans expanded to cover most of the earth, but only because the population was growing and expanding provided easier access to resources. An advanced civilization might realize that population control is preferable to expansion, because expansion out of the solar system would mean communication latency on the order of years, effectively disconnecting the colonists from the internet of their home world.

If they were sufficiently advanced, it would probably be better to build more planets in their home system to keep the communication latency low. After consuming all the matter in their home system, they would then have to either stop growing, mine other systems for matter/energy, or expand. Again, the first two seem like better options than expansion. If they do start mining, it would certainly slow their growth because they still have to obey the laws of physics. But eventually they would reach a point where the energy cost of mining is greater than the total energy in the matter they are mining, discounted for the time it takes. Getting $2 in exchange for $1 is only a good deal if you get the $2 in the near future; if it takes 20 years then you could have done better with other investments. At this point, mining would stop if they are acting rationally, so they might stagnate from a growth perspective, while still improving their home system internally. Perhaps this would result in a Dyson sphere or Matrioshka brain where the civilization could live in a virtual reality, making physical expansion unnecessary.
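
To put numbers on the $1-for-$2 example (the 7% discount rate is an arbitrary assumption for illustration):

```python
rate = 0.07    # assumed annual discount rate
payoff = 2.0   # dollars' worth of energy recovered
cost = 1.0     # dollars' worth of energy spent today
years = 20     # assumed time until the mined matter is delivered

present_value = payoff / (1 + rate) ** years
print(f"present value of $2 in {years} years: ${present_value:.2f}")
# ~$0.52, less than the $1 spent today, so the expedition is a net loss
```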

Expansion of humans over the surface of the earth is not a good analogy for expansion of advanced civilizations across the galaxy because advanced civilizations will likely care more about communications and be able to utilize space more effectively with computers.

But if a highly advanced civilization existed, wouldn’t it want to at least send out probes to explore the galaxy? It seems prudent to know what other life exists to make sure that it doesn’t come to destroy you. It would also make sense to make the probes stealthy because if one gets captured it would take a very long time to replace it. There could be a tiny probe in our solar system monitoring radio broadcasts and forwarding them to its home system using perfectly targeted beams like a laser. The probe itself wouldn’t make contact with us or it would risk being captured.

Once the extraterrestrials got the message, they might make contact with us. But even if they did, it might be a while before we hear from them. The earth has only been broadcasting radio waves for about 100 years, and those radio waves rapidly lose their strength the further they travel. The Arecibo radio telescope would only be able to detect such broadcasts from within 0.3 light years, and the nearest star is 4.2 light years away [1]. Beyond that, the signal would get lost in the noise. However, if there were already a probe here, it could forward the signal directionally to avoid this issue. Assuming the probe was present, there would have been enough time for a round trip to a star up to 50 light years away. But there are only about 2000 stars within 50 light years [2], which is just 0.000001% of the stars in the galaxy. It is quite likely that any extraterrestrial civilizations haven’t heard about us yet because we are so new as a technological civilization.
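
The distance limit follows from the inverse-square law: received signal strength falls off as 1/d², so the required transmitter power grows as d². Extending an Arecibo-detectable broadcast from 0.3 light years to even the nearest star would take roughly 200 times the power:

```python
detectable_ly = 0.3     # range at which Arecibo could hear our broadcasts
nearest_star_ly = 4.2   # distance to the nearest star

# Received flux falls off as 1/d^2, so required power scales as d^2:
power_factor = (nearest_star_ly / detectable_ly) ** 2
print(f"needs {power_factor:.0f}x the transmitter power")  # ~196x
```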

And for the same reasons, we will probably have a hard time detecting any signals that they create, unless those signals are aimed directly at us. Even if they were trying to make contact (and it’s not clear that they would want to), it would seem like a big waste of energy to beam every star system in the galaxy with any frequency. We certainly aren’t doing it; Earth has sent only a few interstellar messages, and they were short in duration [3]. Even assuming a message did get beamed directly at Earth, there is a very high probability that we would miss it [4].

Overall, it’s not too surprising that we don’t yet have evidence of extraterrestrial intelligence; the vast scale of the galaxy, the speed of light, and misaligned incentives all make detection difficult.

[1] http://www.nbcnews.com/science/space/8-8-billion-habitable-earth-size-planets-exist-milky-way-f8C11529186

[2] http://www.atlasoftheuniverse.com/50lys.html

[3] http://en.wikipedia.org/wiki/List_of_interstellar_radio_messages

[4] http://www.setileague.org/askdr/howmuch.htm


Zero to One

Written on December 27, 2014.

This book has really good reviews, but I think that is mostly due to “success bias”: people have a tendency to overrate anything associated with successful people. There were very few original ideas in this book, if any. The main thesis is that we live in a time of technological stagnation and that entrepreneurs should look for businesses that do something new instead of iterating on existing business models. The technological stagnation assertion seems a bit absurd. He says that we haven’t seen much progress outside of computers and communications since 1970. The progress in computers and communications has been massive, and these are huge categories, so he’s basically saying “if you ignore all the amazing progress that we’ve made, we haven’t made much progress.” But even then, there has been a lot of progress outside of those two categories. And the idea that businesses should focus on totally new ideas is very vague and mostly meaningless. Google wasn’t the first search engine, Facebook wasn’t the first social network, and Amazon wasn’t the first online bookstore, much less the first bookstore. Most successful tech startups are better described as iterating on existing technologies than as doing something completely new. It’s hard to even think of examples of things that are completely new, because everything builds on something else in some way. He has some good points, such as that the best businesses are monopolies (but that just repeats Warren Buffett) and that products don’t sell themselves (but that is more thoroughly explained in Crossing the Chasm). He also makes wild claims without any justification, like that computers don’t compete with humans and that AI won’t be a problem until next century. Also, I wasn’t too impressed with his character after reading that he made a blanket rule never to invest in people who dressed in suits for pitches. I would have hoped that the individuals with the power to create or destroy businesses would have a slightly more rational and discerning decision process.

★★★★★


The 22 Immutable Laws of Marketing

Written on July 20, 2014.

An overview of the basic concepts of marketing. The laws are simple, but they aren’t always obvious. The book emphasizes that successful marketing is more about appealing to the customer’s psychology than to their rational decision-making process.

★★★★


Crossing the Chasm

Written on June 30, 2014.

This book provides a great mental model for how high-tech marketing works. It presents a technology adoption bell curve divided into customer groups based on how early they will adopt a new technology. The groups correspond to the innovators who try new technologies for fun, the early adopters (or visionaries) who are the first to find a real practical use for the product, the early majority (or pragmatists) who will adopt once a new technology has been proven by some of their peers, the late majority who wait to adopt until everyone else already has, and the laggards who resist adoption for as long as possible. The Chasm refers to the gap between the early adopters and the early majority, which is a difficult transition. One of the key points is that most customers look to other customers for purchasing recommendations, which creates a chicken-and-egg problem that is best solved by appealing to the innovators. However, innovators don’t usually like to pay for technology, so you may have to wait until you get to the early adopters before the money starts coming in. This mental model applies to technologies that the author refers to as “discontinuous innovations,” meaning technologies that require a significant change of behavior from the customer.

★★★★★


The Black Swan

Written on June 18, 2014.

This book is about the idea that people over-rely on the Gaussian distribution and that big events happen much more often than a Gaussian distribution would predict. It’s an important concept, but there wasn’t much useful information in the book.

★★★★★


Manna: Two Visions of Humanity’s Future

Written on June 18, 2014.

I really like how this book addressed the issue of technological unemployment. It depicts a future in which an artificial intelligence called “Manna” directs low-paid employees through headsets to optimize their performance, giving step-by-step instructions for how to cook a hamburger, take out the trash, or restock the shelves. This eventually leads to a fairly dystopian society where most people are on welfare. Australia, on the other hand, ends up as a robot-managed co-op that is depicted as a utopia. I think it might be difficult to set up such a large co-op system, but if it could be set up, it might be a better alternative. Then again, even if a co-op were established, it would likely fall rapidly behind the pure capitalist economies, because capitalism is more economically efficient, despite the fact that a large portion of the population could be left behind. So some aspects were a bit implausible, but the technological unemployment example was very interesting.

★★★★


Net Neutrality vs Freedom

Written on May 15, 2014.

In my last post, I discussed some problems with the general concept of net neutrality, but in this post I want to discuss net neutrality more concretely and in the sense that most people are talking about online.

[Image: network diagram of content providers (X, Y) and a consumer (A) connecting to the internet through an ISP]

The general assumption is that every user of the internet, whether a content provider or a consumer, pays for their connection to the internet at a certain service tier, and that this covers all of their network costs. It is a nice, simple model. For simplicity, we are ignoring the costs of transmission through the core of the internet, which are bundled into the fees each user pays.

A packet traveling between X and A will be prioritized according to X’s service tier between X’s server and the internet, and according to A’s service tier between the internet and A’s house.

Net neutrality suggests that because the whole link has been paid for by one party or another, it doesn’t make sense to “double dip” by charging both X and A for service over the same part of the network, namely the ISP’s part of the network.

Now in a sense, both X and A are users of the ISP’s network. Every packet on the internet has a source and a destination, and both the source and destination benefit from the connection. Similarly, cell phone companies in the US charge by the minute for both incoming and outgoing calls. This could be considered double-dipping, but since the situation is already so symmetrical (both parties talk and listen), it actually seems more fair than just charging the caller as some other countries do.

So even though it may be A who is watching a movie on Netflix (server X), Netflix is simultaneously leveraging the ISP’s infrastructure to run their business. But customer A already paid for internet access of a certain tier, so a net neutrality advocate would suggest that customer A has already effectively paid on Netflix’s behalf.

The question here is “Has customer A actually paid the total fee for using the ISP’s network on behalf of X, Y, and all other remote sites?”. If A’s contract with the ISP states that the service is “remote party neutral”, then they have paid for it and it would be a contract violation to filter based on the remote party. In this case, there is no need for net neutrality regulations because standard contract law already guarantees that the ISP can’t prioritize connections to X over connections to Y. However, if the contract does not stipulate this, then it may be the case that A hasn’t actually paid for such a privilege. The ISP may have offered a lower rate subject to certain restrictions.

Now why would the ISP even bother to offer a restricted plan? One possibility is that they are also a cable TV company and they want to block Netflix because Netflix is costing them cable subscribers. I know a lot of people get upset by this, but if you think about it, that should be their right. If you do the work of laying down cables and launching an ISP, you should be able to do whatever you want with it, even run your business into the ground with poor service. Sure, it sounds like it will be bad for consumers, but sometimes it’s good to give a company enough rope to hang itself because that will make it easier for competitors to enter the market. If you force a company to be just good enough to deter competitors from entering the market, you may actually be helping a bad company retain their monopoly.

But realistically, I don’t think even bad ISP companies are that dumb. The reason they are interested in charging content providers is that their current business model is economically unstable. By unstable, I mean that there are alternative business strategies that would outcompete them if implemented. This concept is the business analogue of the “evolutionarily stable strategy” (ESS). The model is unstable because some customers are getting a better deal than others: people pay for download rates, but they do not pay based on the quantity they download. So customers who download a larger total quantity of data get more service for the same price as other users on the same tier. If a competitor ISP charges people for what they actually use, the lower-utilizing customers will switch to the competitor for the lower prices, and the original ISP will have to raise its prices for the higher-utilizing customers because the lower-utilizing customers are no longer subsidizing them. In this case the new ISP gets all the lower-utilizing customers and half the higher-utilizing customers (since both ISPs end up with the same price for them). That means the new ISP gets more customers and outcompetes the old ISP.
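
Here is a toy model of that instability; all of the numbers are made-up assumptions, chosen only to show the mechanism:

```python
cost_per_gb = 0.50            # assumed wholesale bandwidth cost, $/GB
light_gb, heavy_gb = 10, 100  # assumed monthly usage per customer type
margin = 5.00                 # assumed profit per customer, $/month

# Incumbent: one flat rate that covers the *average* customer's cost
# (equal numbers of light and heavy users).
avg_usage = (light_gb + heavy_gb) / 2
flat_rate = avg_usage * cost_per_gb + margin           # $32.50

# Competitor: usage-based pricing with the same margin.
light_bill = light_gb * cost_per_gb + margin           # $10.00
heavy_bill = heavy_gb * cost_per_gb + margin           # $55.00

print(f"incumbent flat rate:    ${flat_rate:.2f}")
print(f"competitor, light user: ${light_bill:.2f}")    # cheaper -> switches
print(f"competitor, heavy user: ${heavy_bill:.2f}")    # stays on flat rate
```

Once the light users defect, the incumbent’s average cost per remaining customer rises toward the heavy users’ cost, forcing the flat rate up; that is the instability described above.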

Now probably the simplest option is for the ISP to just add a per-GB fee to their pricing structure. That would be an economically stable strategy from a pricing standpoint. But maybe the ISP assumes that people just don’t like per-GB fees, perhaps due to their market research. So then the ISP notices that their users fall fairly neatly into two categories: those who use streaming video services and those who don’t. The ISP realizes that they can effectively charge per GB without adding per-GB fees by charging a flat rate that is higher for customers who use streaming video. This strategy mimics the economically stable strategy without upsetting customers over per-GB fees. It is a business optimization, created in response to market pressures.

What about startups? If content providers have to pay every ISP in the country for fast-lane access to users, won’t that make it expensive to start a new streaming video service? Well, it is already quite expensive to start a bandwidth-intensive internet company. There are always barriers to entry in a business; that’s no reason to make laws that arbitrarily threaten other businesses. A startup might have to roll out its services one ISP at a time if it doesn’t have enough funding to cut deals with all ISPs at once. And really, that is the worst-case scenario. More likely, the ISP will only throttle the big names like Amazon and Netflix, which would actually give startups a competitive advantage because they would get fast-lane access without paying the fees that the big names pay.

And we haven’t even considered the unintended consequences of net neutrality regulations yet. For example, if a group of hackers launched a cyber-attack on one of the ISP’s customers, flooding the ISP’s pipes with packets and bringing the ISP’s services to a crawl, are we saying that the ISP should be legally forbidden from doing anything to fix the situation? That’s what straight-up net neutrality would suggest. Even if you wrote an exception for cyber-attacks into the legislation, what happens when a legitimate service comes out that effectively does the same thing? Perhaps a service that pings every computer on the internet many times per second, like a search engine spider, except that it indexes all nodes on the internet instead of just web servers. The customer isn’t requesting that traffic at all, so does it still make sense to say that this traffic is already paid for by the customer, or is it a form of freeloading by the remote service? You can probably work around this issue by making a net neutrality exception for anything that the customer didn’t request, but now it’s getting more complicated because you have to define what that means and it goes to show that making laws can cause problems you didn’t anticipate.

The bottom line is that businesses should be free to do business and to try any optimization they want as long as they aren’t committing fraud or violence. If they provide terrible service, the free market will fix the problem. Monopolies almost never last long and they only exist at all if they are doing a decent job and/or the government is helping them. And yes, governments are helping monopolies by making permitting processes difficult and rights-of-way access expensive. Piling on more government legislation (in the form of net neutrality regulations) is not the solution to having too much government regulation. The solution is to fix the problem at its core, reduce regulatory barriers, and encourage competition. I’m confident that we’ll retain remote-neutral internet access if that’s what the people really want. Promote freedom, not net neutrality.

Icons in the image are from the Oxygen Icon Pack, except the server, which is from Itzik Gur.


Net Neutrality is Wrong

Written on May 10, 2014.

EDIT 5-15-14: If you believe that I’m using the wrong definition of “net neutrality”, see this comment.

Net neutrality is factually, practically, and ethically wrong.

I’ve never disagreed with the ACLU before, but if there is one thing that I’ve learned it’s that every organization has some kind of unfair bias to help unify it.

Net neutrality is factually wrong because it doesn’t exist. Every ISP I’ve ever used has offered multiple pricing tiers with different upload and download speeds. That means some users will have more bandwidth than others, violating the idea of net neutrality (see the Wikipedia definition).

Net neutrality is practically wrong because enforcing it makes the internet slower and less efficient than it would be otherwise. If ISPs can charge more for high-bandwidth connections, they will have the financial resources and incentives to upgrade their infrastructure to support them.

Net neutrality is ethically wrong because it means that low-bandwidth sites have to pay more to subsidize the costs of high-bandwidth sites. And it means that some people’s naive opinions on how ISPs should operate their businesses are imposed on ISPs by force instead of by free market pressures.

Ironically, the phrase “free and open internet” has been used frequently in support of net neutrality regulations, but imposing a law is the opposite of freedom. The correct way to “vote” for what you want is to vote with your money and only pay for services that you support. There is no need to fear the loss of net neutrality because the free market, consisting of all of our money-votes, will enforce the right level of net non-neutrality.

