Albert Wenger

Wednesday, January 17, 2018 - 11:30am

Towards the end of last year in Uncertainty Wednesday, I wrote a post about suppressed volatility and gave an example. I ended the write up with:

if we simply estimate the volatility of a process from the observed sample variance, we may be wildly underestimating potential future variance 

This turns out to be true not just for cases of “suppressed volatility” but much more broadly. For any fat tailed distribution, the sample variance will underestimate the true variance. Mistaking the sample variance for the actual variance is the same error as mistaking the sample mean for the actual mean. The sample mean has a distribution and the sample variance has a distribution. Whether or not they are an unbiased estimator for the true values depends on the characteristics of the process.
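To see this concretely, here is a minimal sketch (my own illustration, not code from the original post; the Pareto shape parameter is an arbitrary choice) comparing the typical sample variance of a thin-tailed and a fat-tailed distribution to the true variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 10_000

# Thin tails: normal with true variance 1. The typical sample variance sits
# right at the truth.
normal_vars = [np.var(rng.normal(0, 1, n), ddof=1) for _ in range(trials)]

# Fat tails: numpy's pareto() draws from a Lomax distribution; for shape
# alpha > 2 its true variance is alpha / ((alpha - 1)**2 * (alpha - 2)).
alpha = 2.5
pareto_vars = [np.var(rng.pareto(alpha, n), ddof=1) for _ in range(trials)]
true_var = alpha / ((alpha - 1) ** 2 * (alpha - 2))

print("normal: median sample variance", round(float(np.median(normal_vars)), 2), "true 1.0")
print("pareto: median sample variance", round(float(np.median(pareto_vars)), 2),
      "true", round(true_var, 2))
```

The rare huge draws carry much of the true variance, and most samples of size 100 simply do not contain one, so the typical sample badly understates it.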

Consider objects colliding with Earth. Small objects strike Earth with relatively high frequency. But how should we use a sample? The article from NASA says:

The new data could help scientists better refine estimates of the distribution of the sizes of NEOs [Near Earth Objects] including larger ones that could pose a danger to Earth

That will only work well if we take into account that over longer time periods there have been much more massive impacts, although these are often millions of years apart. This is the hallmark of a fat-tailed distribution: rare, large outlier events. Naively using a sample that does not include these large strikes would give us a dramatic underestimate of the true danger to humanity.

Next week we will look more at what this means (including other examples) and what we can do about coming up with estimates in these situations.

Monday, January 15, 2018 - 11:30am

The USV office is closed today in honor of Martin Luther King Jr. Day. In his book “Where Do We Go From Here: Chaos or Community?” he wrote:

In addition to the absence of coordination and sufficiency, the [social] programs of the past all have another common failing — they are indirect. Each seeks to solve poverty by first solving something else.

I’m now convinced that the simplest approach will prove to be the most effective — the solution to poverty is to abolish it directly by a now widely discussed measure: the guaranteed income.

I strongly recommend reading a longer excerpt from the book, which shows just how visionary MLK was. He wrote the book in 1967, the year that I was born. Since then we have made extraordinary progress in the productive capacity of the economy. Put differently, we can afford a basic income more easily than ever before.

So on this MLK day in 2018, if you are looking for things to do, read up on basic income. Here are some places to get started:

Utopia for Realists by Rutger Bregman
Raising the Floor by Andy Stern
Basic Income FAQ by Scott Santens

And there is a section on basic income in my book World After Capital. If you want to watch a video instead, you can check out my GEL talk or Rutger’s TED talk.

Friday, January 12, 2018 - 5:05pm

It happened on the second to last day of a wonderful family trip to Southeast Asia. I looked at my mobile phone in the morning and it had an alert that said something like “SIM not recognized.” I should probably have figured out right then that something was amiss, but instead I simply assumed that my phone had tried to register on an unsupported network. As I was sitting down for breakfast, three emails arrived in rapid succession that made me realize I was being hacked (time order is from bottom to top):

[Image: the three emails, in reverse time order]

Argh! Clearly someone had redirected my SMS messages to themselves and used them to hack my old Yahoo email account. They quickly changed the password to lock me out and removed my alternate email.

From there I figured their next stop would be Twitter. That’s one of the few services where I used that email address. I ran back to my hotel room and tried to change the email address on my logged-in Twitter account. Alas, I was too late. The attacker had already reset the password and I was logged out.

The attacker then made a single tweet (and, as I later discovered, one rude reply) and pinned it:

[Image: the attacker’s pinned tweet]

Immediately people started remarking that this didn’t sound at all like me and that I had probably been hacked. Several people also texted me, but obviously those texts went to the attacker’s phone!

Thankfully the team at USV immediately jumped into action. They replied to the tweet and others who were quoting it that my account had been hacked. They helped me contact Twitter and have my account suspended and the tweet removed (which happened quite quickly but seemed like an eternity to me). In the meantime I got on the phone with T-Mobile to regain control of my phone.

I let T-Mobile know that someone had gotten into my account. They quickly established that my number had been transferred to a different SIM. I asked, somewhat irately, how that was possible given that I had a password on the account. I was told that someone had shown up at a T-Mobile store posing as me and presented a valid ID. I was able to convince the rep that this had not been me. Thankfully they could see that I was calling from Thailand, and I was able to answer all the security questions and produce the number off the SIM card actually in my phone. From there it only took a few minutes to have the SIM switched back.

With the phone number once again in my control what remained was getting my Twitter and Yahoo accounts back. Thankfully I was able to get great connections to support at both companies and they got this done in record time.

What are the takeaways? First, my accounts that were protected with Google Authenticator were safe (the attacker did try to go after these but without success). Second, someone went to fairly great lengths to get the SIM on my phone switched. This is all the more surprising given the fairly obvious tweet they sent. 

So: SMS-based 2FA is vulnerable (which is well known) if someone either ports your number outright or, more likely, manages to get your SIM switched. I am pretty sure that T-Mobile will not switch my SIM again. Nonetheless, wherever possible I will now make sure to use a different second factor.
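For anyone curious what the non-SMS second factor does differently under the hood, here is a minimal sketch of the TOTP scheme (RFC 6238) that authenticator apps use. The code is derived from a shared secret and the clock, so intercepting or redirecting text messages gets an attacker nothing. This is my own illustration, not production code.

```python
import hmac, hashlib, struct, time

def totp(secret, at=None, step=30, digits=6):
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian time step
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# In practice the secret comes from the QR code you scan when enrolling.
shared_secret = b"example-shared-secret"
print(totp(shared_secret))   # phone and server compute the same code, no SMS involved
```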

Thursday, January 4, 2018 - 7:30am

So I am still away on family vacation and following a self-imposed online diet, but even then it has been impossible to ignore the monster-sized vulnerabilities disclosed today, known as Meltdown and Spectre. And just to make sure nobody misreads my post title, these are bad. Downright ugly. They are pervasive, exploitable, and a real long-term fix will likely require new hardware (one or more extra hardware bits in the CPU). So how can I possibly claim they are good? Here are four different ways I think these vulnerabilities can give an important boost to innovation.

1. Faster Adoption of (Real) Cloud Computing

One might think that Meltdown and Spectre are terrible for cloud computing as they break through all memory isolation, so that an attacker can see everything that’s in memory on a physical machine (across all the virtual machines). But I believe the opposite will be true if you think of cloud computing as true utility computing for small code chunks, as in AWS Lambda, Google Cloud Functions or MongoDB Stitch. This to me has always been the true promise of the cloud: not having to muck with virtual machines or install libraries. I just want to pass some code to the cloud and have it run. Interesting historical tidbit: the term “utility computing” goes back to a speech given by John McCarthy in, wait for it, 1961. Well, we now finally have it and, properly implemented, it will be secure.
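To make “just pass some code to the cloud” concrete, this is roughly what a function looks like on AWS Lambda in Python (a sketch using the standard handler convention; the event fields shown assume an API-Gateway-style HTTP invocation):

```python
import json

def lambda_handler(event, context):
    # The platform hands us the request (`event`) and runtime info (`context`);
    # there is no VM to provision and no web server to run.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello, " + name}),
    }
```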

2. Improved Secure Code Execution in the Web Browser

Over the last few years the focus for Javascript execution in browsers has been speed. New browsers tout their speed and with WebAssembly (aka wasm) it became possible to run Unity games in the browser. But speed does nothing for me when I simply want to do a secure transaction, such as make a payment. For years, people had complained about how air travel speed had actually decreased with the retirement of the Concorde. While true, as a criticism it ignores that safety is another important innovation dimension. And with regard to aviation safety, there has been tremendous progress since the 1970s and 2017 was the safest year on record. We need a safety over speed alternative for at least some code on the web and these new vulnerabilities will help with that.

3. More Secure Operating Systems

Not just the web browser, but operating systems as a whole will benefit from a renewed focus on security. Efforts such as Qubes and Copperhead, while – as far as I know – not immune from the current vulnerabilities, deserve more attention and funding. It may also be time for completely new approaches, although I would prefer something a little less abstruse than Urbit.

4. New Machine Architectures

The fundamental enabling elements behind both Meltdown and Spectre are the extraordinary steps we have taken to optimize for speed in the Von Neumann / Harvard architectures. Out-of-order execution and speculative execution increase utilization but turn out to give code access to memory that it shouldn’t be able to reach. Caches speed up memory access but also allow for side channels. Now we can make software and hardware changes to prevent this, but nonetheless one way to read these vulnerabilities is that we should stop pushing the existing architectures and work more on alternatives. This has of course to a degree already happened with the rise of GPUs and some neuromorphic hardware. And there has been a lot of recent investment in quantum computing. But there are lots of other interesting possibilities out there, such as memristor-based machines.

All in all, then, while Meltdown and Spectre are a huge bother in the short term, I believe that they will turn out to be good for computing innovation.

PS I highly recommend at least skimming the papers on Meltdown and Spectre, which are very accessible to anyone with a basic understanding of computer architecture.

Saturday, December 23, 2017 - 7:30am

As 2017 draws to a close I will be taking a break from blogging, twitter and (for the most part) email until well into January. I have a great set of book recommendations – far too many to get through for quite some time – and I will be traveling with Susan and our children. Given all the craziness of this year in politics, climate change, technology (crypto currencies) and more, I look forward to getting some distance from it all and spending time with family. I feel incredibly fortunate to be able to do this and wish everyone all the best for 2018. 

Thursday, December 21, 2017 - 5:05pm

In 2015 I wrote a blog post titled “Uber’s Greatest Trick Revealed” in which I argued that Uber’s success came from providing a transportation service, which was and is exactly what consumers wanted, rather than from being a neutral platform or marketplace. It would appear that regulators have finally caught up with this, with the European Union’s top court ruling yesterday that Uber is in fact a transportation service.

Now we will hopefully enter a new phase in which regulators figure out how to give consumers the benefits of on-demand, app-based dispatch, including the massive expansion of capacity, while still dealing with issues such as safety, congestion, drivers’ rights, etc. And yes, some local regulators were and are captive to incumbent taxi companies, but that doesn’t mean there are no enlightened ones to be found who will come up with the right rules that can then over time be emulated everywhere.

This has been the pattern of regulation for lots of innovation. Early in the history of cars, for instance, there were red flag laws aimed at preventing cars from going faster than horse-drawn carriages. Ultimately, though, cars succeeded not against regulation but because of regulation. We only got the benefits of individual transportation by having rules of the road and through government investment in roads.

The same will be true for autonomous vehicles and on demand hailing services. They will ultimately be successful *because* of regulation.

Thursday, December 21, 2017 - 7:30am

This will be the last Uncertainty Wednesday for 2017 as I am about to go away on vacation. In the last post I introduced the idea that sometimes when volatility is suppressed it comes back to bite us. I wanted a really simple model for demonstrating that, so I wrote some Python code that makes a 50:50 coin toss and, depending on the result, either increments or decrements a value by 2 (I set the initial value to 100). Here is a plot of a sample run:

[Image: plot of a sample run of the +/- 2 process]

Now, to suppress volatility, I modified the program so that it would increment or decrement the value by 1 instead, i.e. half the original change. I then added the suppressed half of each change into a “buffer”, accumulating +1 in a positive buffer and -1 in a negative buffer. I then gave each buffer a 1 in 1,000 chance of being released. Here is a plot where the buffer is not released:

[Image: plot of the suppressed process where the buffer is not released]

We can immediately see that there is less volatility. In fact, when we determine the sample variance, the sample in the first chart comes in at 178 whereas the sample in the second chart has a variance of only 42.  
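Since the original code was not included in the post, here is a minimal reconstruction of the kind of simulation described above (my own sketch; exact numbers will differ from the plots, and I assume the 1 in 1,000 release chance applies at every step):

```python
import random
import statistics

def run(suppressed, steps=1000, start=100, seed=42):
    random.seed(seed)
    value, pos_buffer, neg_buffer = start, 0, 0
    path = []
    for _ in range(steps):
        move = random.choice([+2, -2])            # 50:50 coin toss
        if suppressed:
            value += move // 2                    # realize only half the move...
            if move > 0:
                pos_buffer += 1                   # ...and bank the other half
            else:
                neg_buffer -= 1
            if random.random() < 1 / 1000:        # rare release of the positive buffer
                value, pos_buffer = value + pos_buffer, 0
            if random.random() < 1 / 1000:        # rare release of the negative buffer
                value, neg_buffer = value + neg_buffer, 0
        else:
            value += move
        path.append(value)
    return path

print("sample variance, full process:      ", round(statistics.variance(run(False))))
print("sample variance, suppressed process:", round(statistics.variance(run(True))))
```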

Here by contrast is a plot from a run where the negative buffer gets released.

[Image: plot of a run where the negative buffer is released]

We can see that once again the chart starts out looking as if it has rather low volatility. But then all of a sudden all the suppressed down movements are being realized in one go and the value drops dramatically.

This super simple model provides an illustration of how suppressed volatility can easily fool us. Let us look at the sample variance in the three graphs for the first 300 data points (each graph has 1,000 data points). The variances are as follows: Chart 1: 139, Chart 2: 36, Chart 3: 26.

So the lesson here should be clear: if we simply estimate the volatility of a process from the observed sample variance, we may be wildly underestimating potential future variance when dealing with a case of suppressed volatility.

Monday, December 18, 2017 - 11:30am

We are about to go away on a three week trip. Long trips have been a wonderful aspect of homeschooling our children. We are fortunate to have the means to show them different parts of the world and this year we are going to Southeast Asia, which will also be new to Susan and myself. 

During our travels I also tend to detox from the Internet and find time to read. I already have a couple of books with me about the region we are going to, but am looking for a few more recommendations (I will also be reading this month’s USV book club selection, “The Sellout”).

I would love to hear from people: what is the best book you have read in the last 12 months and why?

Wednesday, December 13, 2017 - 11:30am

At our house we were all refreshing our computers furiously starting at 8pm, with ever increasing excitement as the evening progressed. We were absolutely thrilled when Doug Jones’s victory was certain. If Doug Jones can win in Alabama, the state where Trump had his biggest victory, after Trump endorsed his opponent, well then Trump too can be defeated.

I have come to think of Trump’s candidacy and presidency as the last hurrah of a past that we need to leave behind for good. Trump, and the money behind him, waged a symbolic campaign of divisiveness that has continued with him in office. It has been a mistake so far to try and defeat him with logical arguments. A while back I suggested that Take The Knee might be the right symbolic counter, but I was wrong about that, despite racism being one of the divisions Trump has exploited.

It now appears that Trump’s real vulnerability might be the #MeToo movement. And I applaud Senator Gillibrand for pursuing that and calling for his resignation. Trump responded in the only way he seems to know how, by denying all responsibility and going on a ridiculous attack. Well, Roy Moore’s defeat, by however narrow a margin, shows that with enough pressure the time for evading responsibility through denying and attacking opponents is up.

All of us who see Trump as unfit to be the President of the United States should now do our part and apply the same pressure to him.

PS Uncertainty Wednesday will resume next week

Tuesday, December 12, 2017 - 11:30am

Later this week the FCC is expected to vote along party lines to do away with net neutrality in the US. I have written extensively about why net neutrality is important for innovation. I will not rehash any of those arguments today (if you want to read from some of the pioneers who helped bring us the Internet, read their letter). Instead, today is simply a call to action. Contact your representatives and let them know why you support the existing net neutrality regulation.

Monday, December 11, 2017 - 11:35am

Planes fly differently from birds. Cars move differently from horses. And computers think differently from humans. That had always been my assumption, but if you needed more proof, Google’s AlphaZero program, which had previously shown novel ways of playing Go, has just learned how to play chess incredibly well. It did so in 24 hours and without studying any prior games. Instead it just played games, following the rules for how the pieces move, and learned from that.

I encourage everyone to check out the technical paper on arXiv, as it contains many fascinating insights. But today I want to focus instead on a key high level implication: computers are not constrained to learning and thinking like humans. And for many, many tasks that will give them an extraordinary advantage over humans. Just like mechanized transport turned out to be superior to horses in almost all circumstances.

Our big brain that was shaped by the forces of evolution is a marvel in its complexity. But it has evolved to let us deal with a great many different scenarios and try as we might, we cannot apply more than a small fraction of our brain to a specific problem (such as playing chess). And even then our speed of learning is constrained by extremely slow clock cycles (see my previous post about AlphaGo).

AlphaZero’s success should be a startling wake up call. When we developed motorized transport, we went from 25 million employed horses in the US in 1915 to 3 million by 1960 and then we stopped tracking as the number fell further. We now have the technology to free ourselves from the ridiculous demands on humans to spend their lives as machines. We can have computers and robots carry out many of those tasks. We can be free to excel at those things that make us distinctly human, such as caring for each other.

But for that to happen, we must leave the Industrial Age behind and embrace what I call the Knowledge Age. AlphaZero can be the beginning of a great era for humanity, if we stop clinging to outdated ideas such as confusing human purpose with work or thinking every allocation problem can be solved by a market. These are the topics of my book “World After Capital,” which continues to become more relevant with breakthroughs such as AlphaZero. 

Saturday, December 9, 2017 - 7:30am

One of the objections against crypto currencies has been their volatility. Bitcoin, for instance, just rose by about 60% over two days, only to then fall by about 15% in a matter of hours. Steam just ended bitcoin support, citing volatility. This has led a number of teams on a quest to create a so-called stable coin: a coin that does not fluctuate in value.

Now that raises some immediate questions. First, an easy one: relative to what is a stable coin stable? Other crypto currencies? The US dollar? The cost of some kind of computer operation? Second, a much harder one: if a coin were to be stable, do supply and demand become meaningless? And is that a good or bad thing? And third, possibly the hardest of them all: how in the world does one create a stable coin?

Here are some potential answers. The most desirable peg for a stable coin would be some kind of purchasing power index. That is a lot easier said than done especially when it comes to computation where cost has been coming down fast (easier for say the Big Mac Index). In the absence of a PPI, the second best would be a global currency basket.

The question about the effects of supply and demand on price though is a tough one. Prematurely stabilizing a coin could destroy the entire incentive effect for building out capacity. Take filecoin as an example. If there is a lot of demand for decentralized storage, one wants the price of filecoin to rise so as to provide an incentive for more storage capacity to be added to the network. Stabilizing such an increase away would effectively be suppressing the entire price signal! I have explained previously that a better approach to keeping speculative (rather than usage based) demand at bay is to have built-in inflation. So a stable coin makes more sense in places where the coin is simply replacing an existing payment mechanism.

As for a mechanism for creating a stable coin: many of the ones I have looked at propose some kind of buy-back mechanism to withdraw coins should the price per coin fall. I happen to believe that none of these account for the ruin problem (meaning you run out of funds for buying back). Given that a new stable coin would start out tiny relative to the size of the financial markets as a whole, these could all be attacked (and an attack would make sense if the coin can be shorted). Leaving aside whether this can be done on an existing blockchain or not, I believe that a potentially better mechanism would be to randomly select coins for deletion (to contract supply) and similarly randomly select coins for duplication (to increase supply). While this does have wealth effects for individual holders, those should be small, random, and linear in the size of holdings, thus minimizing incentive effects.
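As a rough sketch of the random delete/duplicate idea (my own toy illustration of the mechanism, treating coins as indivisible units, which is an assumption):

```python
import random

def adjust_supply(balances, change, rng=random):
    """change = -0.02 contracts supply by ~2%; change = +0.02 expands it by ~2%."""
    adjusted = {}
    for holder, coins in balances.items():
        kept = 0
        for _ in range(coins):                      # every coin faces the same odds
            if change < 0:
                kept += rng.random() >= -change     # coin survives with prob 1 + change
            else:
                kept += 1 + (rng.random() < change) # coin may be duplicated
        adjusted[holder] = kept
    return adjusted

holders = {"whale": 1_000_000, "regular": 10_000, "small": 100}
print(adjust_supply(holders, -0.02))   # each holder loses roughly 2%, large or small
```

In expectation each holder’s share of total supply is unchanged, which is the sense in which the wealth effects are random and roughly linear in the size of holdings.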

I am looking forward to feedback on these answers. And here are two more questions for readers:

A. Do you think a stable coin is needed?

B. If yes, what’s your favorite stable coin project (and why)?

Wednesday, December 6, 2017 - 11:30am

Last Uncertainty Wednesday provided a recap on our adventures with sample means and what those implied about the difficulties of inference. Now we will look at another equally fascinating complication: inferring volatility. As the title of this initial post gives away, we will see that it is easy to make large inference errors when we are dealing with situations in which volatility is somehow suppressed. It turns out such situations are all around us all the time. Let’s work our way into this one step at a time.

First of all, what is volatility? Here is a nice definition, courtesy of Wiktionary: “A quantification of the degree of uncertainty [about the future price of a commodity, share, or other financial product.]” I put the second half in brackets because while volatility is commonly used for financial assets, it could be about something else such as the level of employment in the economy. We have encountered several quantifications of the degree of uncertainty along the way, most notably entropy and variance.

What then might suppressed volatility be? Well if we are fragile, then increased volatility hurts us. So we tend to dislike volatility and look for ways of reducing it. Important aside: if we are “antifragile” then we benefit from increased volatility. The tricky part is that often the measures we take to reduce volatility wind up simply suppressing it. By that I mean it looks, for a while, as if volatility had been reduced but then it comes roaring back. The ways in which attempts to reduce volatility can backfire are among Nassim Taleb’s favorite topics.

The securitization of mortgages provides a great example of suppressed volatility. The basic idea is simple: throw a bunch of mortgages into a pool, then carve the pool up into tranches of different volatility. Some tranches have presumably very low volatility and look like AAA-rated bonds; others have high volatility, more like equity. It should be easy to infer from this description that total volatility has not been reduced; it has just been parceled out.
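A toy waterfall makes the point (my own sketch with made-up parameters, using a crude single-factor model for default correlation): losses hit the equity tranche first, and only losses beyond the equity cushion reach the senior tranche.

```python
import numpy as np

rng = np.random.default_rng(0)

def senior_losses(correlation, n_pools=10_000, n_loans=1_000,
                  base_default=0.02, equity_cushion=0.05):
    shock = rng.normal(0, 1, n_pools)                       # one common shock per pool
    p = np.clip(base_default * np.exp(correlation * shock), 0, 1)
    pool_loss = rng.binomial(n_loans, p) / n_loans          # fraction of loans defaulting
    return np.maximum(pool_loss - equity_cushion, 0)        # spillover past the equity tranche

print("senior loss volatility, low correlation: ", round(float(senior_losses(0.1).std()), 4))
print("senior loss volatility, high correlation:", round(float(senior_losses(1.0).std()), 4))
```

The senior piece looks calm while defaults are idiosyncratic; once they become correlated, the volatility that seemed to have disappeared shows up there too.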

So why am I calling this an example of suppressed volatility? Well, securitization of mortgages worked fantastically well for several decades. But as it did, people started to mistake the lower volatility of the bond tranches for lower volatility of real estate overall. And that meant more and more money started piling into real estate, and as that happened banks got greedy. They underwrote more and more bad mortgage risks, making the pools increasingly risky. And yet for a while, because of securitization, it continued to look as if the bond tranches had low volatility.

So what started out as a legitimate way of allocating volatility across different investors turned into a case of massively increased and suppressed volatility that exploded in the financial crisis of 2007-2008 and the Great Recession that followed.

Next Wednesday we will start to develop a simple model that lets us study suppressed volatility and see why it is so hard to detect. In general the takeaway will be that we should always question anything that looks like a magical reduction in volatility. Most of the time it will be a case of suppressed volatility instead. In that regard the current super low volatility in financial markets, which has become known as the volatility paradox, should be worrisome for investors.

Wednesday, December 6, 2017 - 7:30am

[This is a talk I was going to give at Slush but had to change my travel plans.]

A 12 minute talk should be plenty to address this simple question. Just kidding. This is one of the profound questions that humanity has grappled with for a long time. Here are three artistic takes throughout history. The first is a biological take on being human. This plate from ancient Greece shows centaurs, who are half human and half horse. Mythology is full of human-animal hybrids. The centaur myth is likely to have arisen in civilizations that were invaded by other cultures that had domesticated horses. Let’s fast forward to the industrial age and a mechanical take on being human. There is a great story from the mid 1800s by Edgar Allan Poe called “The Man That Was Used Up.” It is about a general who has a secret. Spoiler alert: he turns out to be mostly assembled from prosthetic parts which have to be put together every morning. And finally here is a recent take, a still from Star Trek: Voyager. In this scene Seven of Nine, a human who has been augmented to become part of the Borg, explains why the Borg are superior to humans. But in the series humans defeat the Borg. So throughout history we have worried about being less than human through the metaphor of the time: biology, mechanics, computers.

Now this talk is part of the Human Augmentation track, so let’s take a look at augmentation, starting with the body. As it turns out, I have a small augmentation in the form of a dental implant. And that is a type of augmentation of the body that is very old. Here is a picture of dental implants from more than 1,000 years ago. Here is another very common type of human augmentation: glasses. Now you might say: Gee Albert, you don’t understand augmentation. Dental implants and glasses just give you back some functionality that you lost. But once you take that seemingly small step it becomes possible to rapidly expand on capabilities. For instance, instead of just vision, you can now have night vision. Now you might say: yes, that augments your capabilities, but it is not “augmentation” because the night vision glasses are external and not fused into the body. But that is a somewhat misleading distinction. Here is a picture of a defibrillator. It is an external way of restarting a human’s heart. And here is an x-ray image of a pacemaker. Some pacemakers just keep the heart beating regularly, but others also act as a defibrillator. In both cases we have fundamentally augmented what is possible for a human. So: humans have augmented the body for a long time, we will continue to do so going forward, and whether or not the augmentation is physically implanted is at best a secondary consideration.

Let’s shift to considering augmentation of the mind. This too is something humans have done for a very long time. The abacus, for example, was invented several thousand years ago to augment our ability to compute with large numbers. Here is a more recent augmentation: the ability to get to places without having to read and interpret a map. And of course more recently we have packaged that into our phones. Again you might say: but Albert, these are not augmentations because they are external to the body. Just as with the example of the defibrillator, this seems like an artificial distinction. And furthermore many of us are so close to our phones that when we misplace them we feel like a part of us is missing. This morning on the way here I shared a cab with an entrepreneur who for a moment thought they had left their phone at the hotel, and they were super agitated by that. If we are honest with ourselves, I think many of us feel the same way. So yes, if you want to be a stickler you might say that it’s only augmentation if it is directly connected to the mind, Matrix style. And if that’s really what you are looking for, we are well on our way. Not just with companies such as Elon Musk’s Neuralink and Bryan Johnson’s Kernel; we are doing it today already with cochlear implants. These have external signal processors that connect directly to the acoustic nerve. So we are basically pretty close to a direct brain connection. Again though the key point is that we have been augmenting our minds for a long time and we still consider ourselves human.

So what then is critical to our humanity? It is not the shape of our body, nor the specific way in which our brain works. Those are not what makes humans human. What then is it? In my book World After Capital I argue that it is knowledge. In this world only we humans have knowledge, by which I mean externalized recordings such as books or music or art. I can read a book today or see a piece of art created by another human hundreds or even thousands of years ago and in a totally different part of the world. We share lots of things with other species, such as emotions, some form of speech and consciousness, whatever exactly that turns out to be. But knowledge is distinctly human. No other species has it.

Knowledge comes from the knowledge loop. We learn something, we use that to create something new and we share that with the world. That loop has been active for thousands of years. We each get to participate in this loop. And we get to do so freely. That turns out to be the crucial feature of what it means to be human: we reap the collective benefit of the knowledge loop but we participate in it freely as individuals. That is the big difference between us and the Borg. And that is also what we need to keep in mind when working on augmentation. We must be careful to ensure that it increases, rather than limits, our freedom to participate in the knowledge loop.

And there is real risk here. Think about a brain link, for example. It could give you much more direct access to the knowledge loop, but it could also be used to prevent you from participating in it. I recommend Ramez Naam’s Nexus, Crux, Apex series, which deals with exactly this set of questions. Like all technology, human augmentation can be used for good and for bad. Let’s all try to work hard to use it for good.

Thursday, November 30, 2017 - 7:30am

For the last few weeks in Uncertainty Wednesday, with the exception of my net neutrality post, we have been looking at the relationship between sample data and distributions. Today is a bit of a recap so that we know where we are. One of the reasons for writing this series is that in the past I have found that it is super easy to get into lots of detail on mechanics and in the process lose sight of how everything hangs together.

So now is a good time to remind ourselves of the fundamental framework that I laid out early on: we have observations that provide us with signals about an underlying reality. Uncertainty arises because of limitations on how much we can learn about the reality from the observations. We looked at both limitations on the observations and limitations on explanations.

In the posts on samples and how they behave we have been working mostly in the opposite direction. That is, we assumed we had perfect knowledge of the underlying reality. For instance, in the first post we assumed we had a fair die that produced each number from 1 to 6 with probability exactly 1/6. In a later post we assumed we had a perfectly Cauchy-distributed process. In each case we then proceeded to produce samples of observations *from* that assumption.

Sometimes people call this the study of probability and reserve the term statistics for going in the opposite direction, the one we are usually interested in, i.e. from the observations to improved knowledge about the underlying reality. Another term that you will hear in this context is “inference.” We are trying to infer something about reality from the data.

What then should be the key takeaway about inference from the last few weeks? That for some realities we can learn a lot from even relatively small samples, while for others that is not possible. Making this statement more precise will be a big part of Uncertainty Wednesday going forward. But for now you may have an immediate allergic reaction to the implied circularity of the situation. We are trying to learn about reality from observations but we don’t know how much we can learn unless we make assumptions about which reality we are dealing with. Welcome to uncertainty.

How do we cut this circularity? We do so only over time through better explanations. Explanations connect our observations to the reality. We start with a pretty bad explanation, which results in poor inference and a cloudy view of reality. We will then often use that view of reality to make predictions and compare those to future observations (possibly from experiments). Discrepancies arise, which lead us to consider different explanations. Some of those will emerge as better. They make better predictions. They fit better with subsequent observations.

This is why explanations are central to understanding uncertainty (and central to all of science). Too often, however, treatments of uncertainty make all sorts of implicit assumptions. For instance, assumptions of normality, or at a minimum of thin tails, abound (even though we saw that fat tails behave wildly differently). Even when the distribution assumptions are explicit, they are often not related to a specific explanation.

Monday, November 27, 2017 - 11:35am

Over the holiday weekend I did a lot of driving in a loaner Tesla (we have ordered one ourselves but it is, ahem, delayed). Well, actually, the car did a lot of the driving. I made extensive use of “Autopilot” features, including the smart cruise control and the autosteering. New cars by other automotive brands have similar capabilities. Long before getting to fully autonomous cars, I am blown away by how immediately transformative this experience is for highway driving.

For me there were two immediate and profound changes. The first has to do with being in stop and go traffic, which one often encounters on the highways to and from New York, such as heading out to JFK. I usually hate this, because the tedium of stop and go makes the time feel that much longer. Autopilot transformed this experience. Now some of that is the novelty effect for sure, but being able to fully engage in a conversation, as opposed to having a big part of one’s brain tied up in not hitting the car in front of you (but also not leaving a huge gap), made the time go by much faster for me.

The second has to do with speeding. We drove up the Taconic Parkway, which is notorious for aggressive ticketing for speeding. Here too Autopilot was a game changer. I realized that speeding is something I do to keep myself busy while driving. And then of course occasionally I speed for the opposite reason, meaning going downhill and picking up speed while in conversation. Again I may be smitten with the novelty effect, but just letting the car do the work at a safe increment to the posted speed limit (a couple of MPH faster) made me perfectly relaxed.

Now at present Autopilot requires you to keep your hands on the steering wheel. You can actually take them off but then you will get a prompt at irregular intervals to put them back on and if you don’t do that quickly enough, the Autopilot disengages for the rest of the trip! This happened to me a couple of times and immediately felt like the loss of crucial functionality (Hint: if you can pull over and hit “Park” the car resets and you have Autopilot again.)

Following this weekend, I can’t wait to have Autopilot permanently. I hope that for the highways I drive frequently, it will soon no longer require having my hands on the steering wheel. Getting to and from places will have never been easier!

Wednesday, November 22, 2017 - 11:30am

Just imagine for a moment the world we could easily find ourselves in. You love my series of blog posts called “Uncertainty Wednesday,” but when you try to access it, instead of seeing the content you receive a notice from your ISP (the company you pay to access the Internet) that Continuations is not included in your current plan. You need to upgrade to a more expensive plan to see any content hosted on Tumblr.

This is not some kind of far fetched hypothetical possibility. Without Net Neutrality that’s exactly what will happen over time. We do not need to speculate about that; we can see it in countries that do not have Net Neutrality. Here is a picture from a carrier in Portugal:

[Image: the Portuguese carrier’s service bundles]

Now you might say: but isn’t it good if this makes services cheaper to access? What if someone can only afford 5 Euros per month, here at least they are getting some access?

But asking the question this way is buying into the ISP’s argument that they should get to decide which services you can access. Any one of the bundles above effectively requires a certain amount of bandwidth from the carrier. It should absolutely be the case that a carrier can give you less bandwidth for less money. But then with whatever bandwidth you have purchased you should be able to do as you please.

I have explained here on Continuations extensively why Net Neutrality is required for last mile access due to the lack of competition. So I am not going to rehash that again, you can read it at your leisure and so far without having to pay extra.

Net Neutrality is once again under attack. Ajit Pai, Chairman of the FCC, has announced his plan to “restore internet freedom,” which is, as it turns out, not your freedom as a consumer to use the bandwidth you have purchased as you see fit, but rather the freedom of your ISP to charge you for whatever it wants to.

So if you don’t want to wind up with the Portugal situation from above, go ahead and call Congress. Thankfully the website Battle for the Net makes this super easy. Do it!

Monday, November 20, 2017 - 11:30am

I have mentioned here on Continuations before that we have been homeschooling our children. The main reason for doing so is to give them plenty of time to pursue their interests. Interests that over time can deepen into passions and possibly ultimately provide purpose. For our son Peter one of those interests has been fashion. He has been learning how to sketch, cut, sew, etc. since age 8 and now at 15 has put together his third collection. This one is Men’s Wear and for the first time he is making it available for sale.

I particularly like the Bomber Jacket above. I am definitely not cool enough though to wear the Kilt:

You can find more pieces from the collection at Peter’s web site Wenger Design.

Friday, November 17, 2017 - 5:05pm

One of the problems with a relatively open platform such as Twitter is impersonation. I can claim to be somebody else, upload their picture to my profile and tweet away. This is particularly problematic for public figures and businesses but anyone can be subject to impersonation. Years ago, Twitter decided that it would “verify” some accounts. 

While a good idea in principle, Twitter’s implementation sowed the seeds of the current mess. First, Twitter chose to go with a heavily designed checkmark that looks like a badge. Second, this badge appeared not just on a person’s profile but prominently in all timeline views as well. Third, the rollout appeared geared towards Twitter users who were somehow cool or in-the-know. Fourth, Twitter seemingly randomly rejected some verification requests while accepting others.

The net result of all of these mistakes was that the verified checkmark became an “official Twitter” badge. Instead of simply indicating something about the account’s identity it became a stamp of approval. Twitter doubled down on that meaning when it removed the “verified” check from some accounts over their contents, most notably in January of 2016 with Milo Yiannopoulos.

Just now Twitter has announced a further doubling down on this ridiculously untenable position. Twitter will now deverify accounts that violate its harassment rules. This is a terrible idea for two reasons: First, it puts Twitter deeper into content policing in a way that’s completely unmanageable (e.g., what about the account of someone who is well behaved on Twitter but awful off-Twitter?). Second, it defeats the original purpose of verification. Is an account not verified because it is an impostor or because Twitter deverified it?

What should Twitter have done instead? Here is what I believe a reasonable approach would have been. First, instead of a beautifully designed badge, have a simple “Verified” text on a person’s profile. Second, do not include this in timeline views. It is super easy from any tweet to click through to the profile of the account. Third, link the “verified” text in the profile to some information such as the date of the verification and its basis. For instance, “Albert Wenger - verified October 11, 2012 based on submitted documents.”
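For illustration, the record behind that “verified” link could be as simple as this (hypothetical fields, purely to make the idea concrete):

```python
verification_record = {
    "handle": "albertwenger",
    "verified_on": "2012-10-11",
    "basis": "submitted documents",        # or "third-party identity service", etc.
    "verifier": "ExampleIdentityCo",       # hypothetical contracted provider
}
```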

This type of identity-only verification would be quite scalable using third-party services that Twitter could contract for (and users could pay for if necessary to help defray cost). Twitter could also allow users to bring their own identity to the service, including from decentralized systems such as Blockstack. It would also make it easy for people to report an account strictly for impersonation. Harassment on the platform is a real problem, but it is a separate problem and one that should be addressed by different means.

Wednesday, November 15, 2017 - 11:30am

Today’s Uncertainty Wednesday will be quite short as I am super swamped. Last week I showed some code and an initial graph for sample means of size 100 from a Cauchy distribution. Here is a plot (narrowed down to the -25 to +25 range again) for sample size 10:

And here is one for sample size 1,000:

Yup. They look essentially identical. As it turns out this is not an accident. The sample mean of the Cauchy distribution has itself a Cauchy distribution. And it has the same shape, independent of how big we make the sample!

There is no convergence here. This is radically different from what we encountered with the sample mean for dice rolling. There we saw the sample mean following a normal distribution that converged ever tighter around the expected value as we increased the sample size.
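Here is a minimal sketch of the comparison (my own reconstruction of the setup, not the code from last week’s post), using the interquartile range as the spread measure since the Cauchy sample mean has no variance to report:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 10_000

def iqr(x):
    q75, q25 = np.percentile(x, [75, 25])
    return q75 - q25

for n in (10, 100, 1000):
    die_means = rng.integers(1, 7, size=(trials, n)).mean(axis=1)
    cauchy_means = rng.standard_cauchy(size=(trials, n)).mean(axis=1)
    print(f"n={n:4d}  die-roll IQR={iqr(die_means):.3f}  Cauchy IQR={iqr(cauchy_means):.3f}")
```

The die-roll sample mean tightens roughly with the square root of the sample size; the Cauchy sample mean keeps the same spread no matter how large the sample gets.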

Next week we will look at the takeaway from all of this. Why does the sample mean for some distributions (e.g. uniform) follow a normal distribution and converge, but not for others? And, most importantly, what does that imply for what we can learn from data that we observe?

Albert Wenger is a partner at Union Square Ventures (USV), a New York-based early stage VC firm focused on investing in disruptive networks. USV portfolio companies include: Twitter, Tumblr, Foursquare, Etsy, Kickstarter and Shapeways. Before joining USV, Albert was the president of del.icio.us through the company’s sale to Yahoo. He previously founded or co-founded five companies, including a management consulting firm (in Germany), a hosted data analytics company, a technology subsidiary for Telebanc (now E*Tradebank), an early stage investment firm, and most recently (with his wife), DailyLit, a service for reading books by email or RSS.