Think tanks and block chains

Anil Dash:

I teamed up with Kevin McCoy to create monegraph. It’s a system that uses the block chain technology which underpins Bitcoin, but puts it to work in service of artists, so that they can verify that a digital work is an original, with a verifiable provenance. I describe the context of the work in A Bitcoin for Digital Art, my first piece for Medium’s “The Message” collection, and we also showed it off with a demo at the most improbable of venues, TechCrunch’s Disrupt conference. The response overall has been great, as you can tell from the monegraph tumblr.

This is exciting, an eye-opener, and the type of work you’d expect FAT to produce. Nevertheless, it is now clear that the block chain concept can be applied to diverse domains and redefine them.

From Anil’s post I also found out about Data & Society, an NYC-based think/do tank focused on social, cultural, and ethical issues arising from data-centric technological development. Data & Society also contributed to The White House’s Big Data and Privacy Working Group’s review.

Snapchat’s future

Our most basic need right after survival, and concurrently a key quality of the human condition, is the need to connect and communicate. I’ve been an avid Snapchat user for almost a year now, sold ever since on its ephemeral premise. A friend from the States dragged me into it long before it went huge in Europe as well1 — a great idea in retrospect, since it eliminated a lot of communication friction, both formal and informal. It connects.

Snapchat’s “[…] pure sharing, the ‘lowest-commitment form of communication.’”

I’m also a Casey Neistat fan. And today Casey posted this:

Casey Neistat x Snapchat

I followed suit and within seconds I was watching random segments from Casey’s Cannes trip for the Lions. This experience was new; unique; social (despite being “one way”); direct; creative; raw.

Since Neistat doesn’t have me in his contacts (read: “follow”) it reminded me of Twitter: everybody can follow you, but you choose whom you’ll follow. Only this time it isn’t about fragments of one’s life conveyed through brevity and (sometimes) 140 characters, but through visuals — photos and video.

A new way to connect with people by living the latest highlights of their lives as if you were there; not reading about them or having them stored in your messages feed forever. You don’t need that anymore. “This is me, this is what I do now, pay attention by not paying attention and connect with me.”

My argument regarding Snapchat’s future, though, is not about the consumer facet of this feature. It’s about Snapchat’s business. Bubble or not, there’s a rich background to Snapchat, both as a new medium in and of itself and as a startup.

This novel way of sharing and, most importantly, connecting — whether it’s friends with friends or fans with artists — can, provided it’s executed well, drive extraordinary growth for the service, with much higher engagement ROI and better KPIs overall. The Stories feature has existed for some months, but only now is Snapchat starting to capitalize on it. Its upgrade, “Our Story,” is an even better opportunity for big brands, artists, celebrities, et al. $3 billion isn’t crazy anymore.

In the meantime, Kanye West shared some relevant thoughts about branding, design, and communication at the Lions. “People ask, where’s our future? Where’s our flying cars? That is the world that’s floating above us right now.” That’s the Internet and that’s what it’s all about: connecting us.

Update: Hate to say it but: I told you so.

Update 2: Told you so.


  1. Ephemeral hipster. 

Automate Git, g++, Dash with Bash

I was browsing around GitHub when I found Jim’s git-aware prompt repo: a nice Bash hack that displays the current git branch name in your terminal prompt when you’re in a git directory. After installing it, it struck me1 that I can automate tedious terminal commands with Bash aliases.

The general concept is that a Bash alias doesn’t accept parameters itself, but it can call a function which does.

g++ compiler alias

I build my C++ source files with g++ -Wall -O3 foo.cpp -o foo. Now that’s a futile life: typing this multiple times a day. But thank Richard Stallman for Brian Fox. Edit your .bashrc or .bash_profile and add the following:
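(A minimal sketch; build_cpp is an arbitrary name for the helper function.)

    # An alias can't take parameters, but the function it calls can.
    build_cpp() {
        # cut trims "$1" at the '.' so that foo.cpp compiles to foo
        # instead of overwriting the source file.
        g++ -Wall -O3 "$1" -o "$(echo "$1" | cut -d'.' -f1)"
    }
    alias build=build_cpp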

The cut command cuts the $1 string (the first parameter) at the ‘.’, keeping only the base name. That’s necessary unless you want your source file to be replaced with machine code. Hint: you don’t.

Now type build foo.cpp and voilà — thy binary shall appear before thee.

Dash alias

Dash is a documentation browser. I’m using it in Sublime Text (there’s a package for it) and just today I found out that it can be invoked from the terminal as well, with open dash://foo, to search for foo in the documentation. Instead of manually typing all this every time, I aliased it:
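(Again a sketch; dash_search is an arbitrary helper name.)

    # Open Dash's URL scheme with the search query as a parameter.
    dash_search() {
        open "dash://$1"
    }
    alias dash=dash_search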

Now simply type dash foo and Dash pops up.


  1. I guess I’m really late to the party, but who cares anyway. 

In which a computer passes the Turing Test for the first time

Yesterday, a computer program simulating a 13-year-old boy named Eugene passed the Turing Test at an event organized by the University of Reading at the Royal Society in London. This is a huge and remarkable breakthrough on the AI front.

The Turing Test is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. It is, essentially, a conversation between three parties, separated from one another: a machine, a human respondent, and a human judge. If the judge cannot reliably tell the machine from the human (at this event, the bar was fooling the judges more than 30% of the time), the machine is said to have passed the test. Until yesterday this had never been achieved. The Turing Test doesn’t check the correctness of the answers, but rather how closely they resemble typical human ones.

Quoting Alan Turing:

I believe that in about fifty years’ time it will be possible to programme [sic] computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Yesterday also marked 60 years since Turing’s death.

This marks a very important step forward for the AI (and the general computer science) community. We’ve made, for the first time, a machine so smart and fluent with human language and cognition that one can’t tell it’s not a human. Such an event paves the way for superintelligent machines, better NLP, and artificial neural networks, along with better answers to philosophical questions such as “can a machine have a mind, consciousness, and mental states?,” “can a machine have emotions?,” and “can a machine be self-aware?”1

The rabbit hole goes deeper

The philosophy of artificial intelligence is a broad field and currently one of the most intellectually stimulating ones. Intelligent machines are closely related to questions about the deterministic nature of our universe: whether it’s completely describable programmatically, or even a simulation; whether there are multiple (perhaps infinite, containing all logical possibilities) other universes or universe simulations; whether the simulation is written by another species or by a sapient machine; or whether it’s a simulation within another simulation.

Some (like Price and Hamilton) argue that humans are self-replicating machines themselves. I briefly touched on this topic at the end of my Google Glass review last year:

In 1967 George R. Price went to London after reading Hamilton’s little-known papers about the selfish gene theory and discovering that he was already familiar with the equations; that they were the equations of computers. He was able to show that the equations explained murder, warfare, suicide, goodness, spite, since these behaviors could help the genes. John von Neumann, after all, had invented self-reproducing machines, but Price was able to show that self-reproducing machines were already in existence, that humans were the machines.

Juergen Schmidhuber famously said at his TEDx talk (I can’t stress enough how much of a “must-see” it is; he argues that this universe and our own lives are just by-products of a very simple and fast program computing all logically possible universes) that “to the man with a computer, everything looks computable.”

Now, if our reality is indeed deterministic, then it’s also completely describable (and thus predictable) by a computer. Hence, a superintelligent computer will have much more executive and cognitive power over this very domain. Moreover, in that case it’s also easier to describe the world itself to such a machine. I once argued that, perhaps, “if our universe were a computer simulation then Deterministic Finite Automata would be to it what particles are to physics models,” but my friend Panagiotis countered my hypothesis with an even better, more interesting one, extremely relevant in our case to the Turing Test:

That’s a very interesting topic and I keep thinking about it constantly.

Let me first give you a proof of why particles are not equivalent to DFAs. Your computer is the equivalent of a Turing machine and it’s built from particles. So, assuming particles are equivalent to DFAs, a model emerging from DFAs can’t be more powerful than a DFA. But by definition Turing machines are more powerful than DFAs; thus particles can’t be equivalent to DFAs.

What’s more, a human being might be more like a Turing machine. That comes from the fact that DNA/RNA itself seems to be a Turing machine, and models emerging from it can’t be more powerful than a Turing machine. Thus a brain is probably equivalent to a Turing machine, capable of running other Turing machines as well.

So, here comes my point. Given that the brain is a Turing machine, it can be simulated, and for me that means that perception, or the “soul,” can be simulated as well. Free will is just a perception. For me and you, there will always be a machine that can simulate us. Laplace, in a similar fashion, introduced the thought experiment of a demon able to predict the future given full knowledge of the current state. Determinism comes from the fact that someone knows the exact current state. Let me give you another example. For a computer there is no such thing as randomness: it can always know what the next number will be, so for a computer there is only determinism. But for human perception, which might not have access to its own current internal state, does this actually matter? Will you feel less lucky if you win the lottery with numbers generated by a computer?

I believe having full knowledge is impossible, and thus determinism is well hidden under this constraint. In game theory that’s the equivalent of incomplete information. Thus free will is just the ability of organisms to create strategies to cope with that incomplete information. Determinism doesn’t contradict free will; free will just emerges from our limited capacity for predicting the future.
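To make Panagiotis’s power hierarchy concrete: a DFA gets by with a fixed, finite set of states, while even a toy counting problem needs unbounded memory, which is exactly the extra power a Turing machine (or any real program) has. A quick sketch in Bash (the function names and toy languages are mine, purely for illustration):

    # A DFA over {a,b} accepting strings that end in 'b':
    # two states, zero counters, constant memory.
    dfa_ends_in_b() {
        local state=0 c i
        for (( i=0; i<${#1}; i++ )); do
            c=${1:i:1}
            case "$c" in
                a) state=0 ;;
                b) state=1 ;;
            esac
        done
        (( state == 1 ))
    }

    # Balanced parentheses need an unbounded depth counter; no fixed
    # number of states suffices, so no DFA recognizes this language.
    balanced_parens() {
        local depth=0 c i
        for (( i=0; i<${#1}; i++ )); do
            c=${1:i:1}
            case "$c" in
                '(') (( depth++ )) ;;
                ')') (( depth-- )) ;;
            esac
            (( depth < 0 )) && return 1    # closed more than opened
        done
        (( depth == 0 ))
    }

For example, dfa_ends_in_b "aab" and balanced_parens "(()())" both succeed, while balanced_parens "())(" fails.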

Let’s not forget that a deterministic universe means we lose our free will. As Panagiotis said above, “Free will is just a perception.” Don Knuth also said something apt to me last year when I asked him about a related topic:

[…] and if our universe is a computer simulation, which means we’re simply mathematical representations and everything is deterministic and, as a result, we lose our free will, then there’s nothing we can do about it and we cannot answer it, thus we shouldn’t bother thinking about it.

It is obvious now that the Turing Test is relevant not only to abstract, sapient-or-not machines playing “conversation games” but also to physical and biological systems like DNA. The implications of computability (and especially of intelligent computing) are enormous. Yesterday we went one step closer to intelligent computing, something we couldn’t fathom even a few years ago. What an exciting time to live in.


  1. A new avant-garde field in bioethics, “theoretical bioethics,” asks whether software can suffer and, if so, what the ethics are of creating, modifying, and deleting it from our hard drives. 

Hello, Yosemite

I wrote my thoughts about Monday’s WWDC elsewhere.1 Today I installed OS X Yosemite on a new partition on my 15″ Retina MacBook Pro (2.3 GHz i7, 16 GB RAM)2 and I only have one word: wow.

Yosemite

I’ll briefly elaborate on two aspects of Yosemite as of beta 1.

Interface

It is gorgeous. Amazing. Beautiful. Helvetica Neue is a great fit. The new Dock is lovely. Even the vivid blue of the folders is OK.3 Transparency, although often buggy or omitted in this first beta, feels natural and appealing. Safari looks stunning. The other Apple apps that got a face-lift (Mail, Calendar, Messages) fit perfectly with the new semi-transparent window chrome. Using Mavericks now feels almost like using iOS 6 after iOS 7.

Performance

Time for the bad stuff. Even on my machine Yosemite is quite slow, and it’s not only the UI. Definitely not fit yet for a primary machine. But hold your pitchforks before you start going mental against Cupertino: it’s only beta 1, which for Apple means something like early post-alpha for the rest of the world. Things will get better and faster; I expect that to happen around beta 3 or 4.

All in all, OS X Yosemite—even now—is stunning.

And never forget what John Siracusa wrote on April 2, 2001:

I’ve said it before and I’ll say it again: the “X” is pronounced “ten”, like the Roman numeral, not “ex” like the letter. Don’t make me come over there.

Update: I’ve deployed Yosemite on my late-2009 iMac and I’m happy to report that it works smoothly; I’ve made it my primary boot partition. Exciting (and weird, since it lagged on the Retina MacBook; I guess, post hoc, the choppy performance had to do with the Retina UI).

Update 2: Mac OS X Yosemite under the magnifying glass. A very nice round-up of all the visual changes in Yosemite.


  1. Seven Twenty Three isn’t exactly ‘elsewhere’ per se; it’s a new blog I co-started with my friend Konstantinos. 

  2. This is relevant to Yosemite’s performance; see Performance section. 

  3. I bet, though, the exact color will change until the Fall release. 

10000 things all CS students should do before graduating

  • 0000 – Buy your own domain name.
  • 0001 – Install an Apache web server and configure it in a non-trivial way, e.g. serve multiple domains.
  • 0010 – Install WordPress and have your own blog. Write blog posts regularly. Write well. Good writing is a critical skill to master in this profession.
  • 0011 – Run your own web site at home or in a hosting company.
  • 0100 – Write at least one complete LAMP web app, preferably two—one where P = PHP and another where P = Python.1
  • 0101 – Have your own (physical or virtual) server on the cloud.
  • 0110 – Install VMWare (or an equivalent) in order to boot up your laptop with more than one OS.
  • 0111 – Configure your home DSL router in order to serve a website or another kind of server from your home machine/laptop to your friends.
  • 1000 – Use a packet sniffer to learn about the network requests your computer makes to your favorite game server.
  • 1001 – Make contributions to an open source project.
  • 1010 – Write an app that uses at least one of the popular web APIs like Facebook Connect or one of Google’s.2
  • 1011 – Use Google AdSense on your website and make money just by virtue of attracting traffic.
  • 1100 – Compile a complicated open source project from scratch, like OpenSim or Matterhorn.
  • 1101 – Read works of literature and, besides enjoying the ride, pay close attention to how the author tells the story and makes use of words. Your programs should be as carefully written as those works of art. (Code is poetry.)
  • 1110 – Get yourself involved in a software project where requirements are bound to change halfway through—that’s about 0.01% of homework projects and about 99.99% of real world projects, so find one of the latter kind. Finish the project with patience and the ability to take criticism in a constructive way.
  • 1111 – Write an application using MapReduce. Run it on Google App Engine or Amazon EC2. (For the shape of the paradigm, see the sketch below this list.)

(via Diomidis Spinellis, Cristina Videira Lopes)
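Before touching App Engine or EC2, you can get a feel for MapReduce’s shape right at the shell, since the canonical word-count example decomposes into map (emit one word per line), shuffle (sort groups identical keys), and reduce (count each group). A rough sketch (input.txt stands in for whatever corpus you have lying around):

    tr -s '[:space:]' '\n' < input.txt |  # map: one word per line
        sort |                            # shuffle: group identical keys
        uniq -c |                         # reduce: count each group
        sort -rn                          # most frequent words first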

Off the top of my head I’ll add: study lots of logic; have Bertrand Russell as your mathematical hero; read GEB; build a mobile app; favorite GitHub repos the same way you Instapaper an article—be interested but, alas, never come back; start writing in LaTeX (and sometimes in Markdown); reduce your Dock size to two icons (if you’re using OS X, that is); “real men reduce from 3SAT;” don’t use IDEs; find serene peace when writing code, only to come back after a few days and spend 20 minutes figuring out what you’d written; appreciate the Internet; write recursive functions that terminate without ifs, switches, ?: operators, or loops.


  1. And one where P = NP, right? 

  2. I hear Twitter is trendy these days. 

Dispatches from the contextual world — my ACSTAC 2014 talk

On 3/15 I was invited to speak at ACSTAC (Anatolia College Science and Technology Annual Conference) at my high-school alma mater, Anatolia College. I talked about how the Internet, data, algorithms, and predictive AI shape the present and future of our personas and society, and how technology facilitates the creation of innovative startups and businesses. Below you can find the presentation’s slides and an edited transcript of my talk.

Here’s a simple truth: the Internet has radically changed our world and is, effectively, an augmentation of our brains’ memories. Over the course of the past 20 years the idea of networking all the world’s computers has gone from a research science pipe-dream to a necessary condition for economic and social growth. From government and university labs to kitchen tables and city streets. We are all travelers now, desperate souls searching for a signal to connect us all. And it is awesome.

With the advent of machine learning and artificial intelligence, machines (and by machines I mean both the actual hardware devices and the software, algorithms, and code running inside them and in the cloud) will be just like us, and someday we will hardly be able to tell the difference between us and them, except by our own Voight-Kampff test.

In order to try to understand the future we have to first understand the past and see how we’ve come to this very point in human history—to be on the verge of creating stuff that might be eventually smarter than ourselves.

Douglas Adams proposed three axioms to describe technology: anything that already exists when you’re born is normal; anything invented during your youth is exciting; and anything invented after you’re thirty-five is against the status quo. The Internet is, relatively speaking, a very young technology; the World Wide Web turned 25 just a few days ago. Despite its penetration into our society at an unprecedented level, we still do not fully comprehend it as a medium of communication, expression, business, research, and the list goes on. Our approach toward the future is one of fear. We’re afraid of the unknown, of the uncontrollable, and that’s only natural. Sometimes we look like whiny pessimists who complain about how things are somehow worse than they used to be and feel entitled to that feeling. But we’re wrong.

Because in order to keep growing, to keep getting better as a society and as people, we need to strive for progress, for the new, for innovative stuff: we need to create. How do we define progress, then? Progress is the accelerated rate of change, and the Internet is producing more progress than progress ever dreamed of. Why are we so quick to assume, then, that the future is a Stanley Kubrick/Blade Runner-like dystopia? When you see those people on the bus with their newspapers you don’t think they’re antisocial, because it’s natural to you, as per Adams’ first axiom. We think that technology and the Internet have made our lives somehow worse, or that they are trying to, but that’s just nonsense. Never before were we able to communicate the way we communicate now, the very way our generation was born into. I’ve been online since 1999 and I have always felt, like, wow!, everything in my life is richer and deeper thanks to new technologies. The opportunities for expression, business, and innovation are greater than ever before. Our world is changing, and that’s not as bad as it seems; despite technology’s shortcomings, it gives voice, and the freedom to create, to so many.

Anytime a new technology gets introduced we, the tech community, try to re-answer all these questions. This xkcd comic strip highlights the conundrum perfectly. Yes, we might be alienated, perhaps more than before, but it’s not the Internet’s fault; in fact, research shows it makes us more connected with our friends in real life. We got through the alienation of the telephone, the fear that books would make us forget, that the telegraph would ruin our ability to write prose. We’re still here, and our world is more connected, social, intelligent, and educated than ever before. Better.

We didn’t get flying cars but we can read what happens in the world in real-time thanks to Twitter. Wikipedia, Khan Academy, and Coursera are the closest thing to Matrix-like instant brain downloads we have today.

The phrase I mentioned a bit earlier, “real life,” is a very interesting one. What’s real and what’s not? As Morpheus said in The Matrix, everything is just electrical signals that our brain interprets in different ways. Reality is confusing. To elaborate: the term digital dualism, coined by researcher Nathan Jurgenson, is the notion that people conceptualize the world into online and offline. As Nathan says, that’s not true anymore; in fact, it was never true, but only now do we seem to understand it. The common (mis)understanding is that experience is zero-sum: time spent online means less spent offline; we are either jacked into the Matrix or not, either looking at our devices or not. But the Internet isn’t an adjunct to real life; it’s not another place. You don’t do things “on the Internet,” you just do things. The network is interwoven into every moment of our lives, and we should treat it that way.

Almost always, the experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that any connection between the key values of the present and those of the past has been lost. People before the Internet were not measurably wiser, sitting around discussing Proust. They were watching Married… with Children and playing video games.

After this brief theoretical discussion of the Internet’s nature let’s see where we are right now. Where we’ve come to. I mentioned that we’ll cover wearable computing devices and contextual services. Let’s start with the former.

Take, for instance, the Jawbone Up and the Nike FuelBand. They can record where you go, how many steps you take a day, how fast or slow you move, how well you sleep, and each sleep phase you go through every night. They can measure calories burned and much more, and some, like the FuelBand, also integrate a small social network so you can challenge your friends.

It’s not only about what they record, though. It’s equally important to conceptualize and present the data in a meaningful way, and this is where the whole thing gets interesting as we go deeper into the contextual domain. Algorithms are now smart enough to know which data is important to us and how we should get that information. Foursquare, for example, can send you a push notification the moment you enter a new coffee shop and offer an insider’s tip, or your friend’s for that matter, without you doing anything. And with Square you can order and pay automatically when you enter a venue.

Smart watches are going to be a thing in the future; they’re just not there yet. They are deeply tied to your smartphone and can display things like caller info, new texts, calendar notifications, and more. They’re a bit clunky now and many consumers are not quite sold on their premise, but they have the potential to be quite a player in the market.

Other wearable computing devices are mounted on the head instead of the wrist. This ski mask, for example, can show how fast you’re skiing, your location in the ski resort, and what’s nearby. It also integrates with your phone to show caller info, new texts, and other relevant information. As a former ski racer I find this extremely exciting, because it means that in the future we can also integrate things like training routes, the ideal line during a race, time intervals, and so on.

Another subset of wearable devices is sensors, which are going to be huge. Sensors will have paramount implications for the retail, med-tech, transportation, and home-automation industries. Here we see Tile, a small sensor we can attach to pretty much anything in order to track that particular item. Other popular sensors include the Proteus pills, pills with an integrated digital sensor that transmits real-time medical data to the doctor and also shows whether a patient has taken his medication, and iBeacon, Apple’s new, overlooked technology. iBeacon can radically transform our retail experiences, as we’ll see in a moment.

But the pinnacle of things wearable and contextual is Google Glass. After decades of self-reflexive irony and endless retromania, pop culture finally seems able to rediscover its futurist leanings. Glass is extremely important for a few reasons. First of all, Glass itself is not something entirely new; products like it have existed in research labs for 10 to 15 years now. But it’s the first major product of its kind from a major company (actually one of the world’s largest), which in a way says “the time is almost now.” It’s irrelevant how well or badly Glass retails as a consumer product; its potential is bigger than its market performance. Glass signifies the very notion of machines finding a way onto us, and soon inside us. Glass won’t stay the same, of course: it’ll evolve into contact lenses (it already has) and later into an implant in our retinas. The potential is mind-boggling. It might be scary, but it’ll be awesome. To put it differently, Google is proposing that there is value in a totally new product category and a totally new set of questions, just like the Apple II proposed: would you reasonably want a computer in your home if you weren’t an accountant or a professional? If only you knew back then. That is the question Glass is asking, and I hope that in the end that is how it will be judged. Much like the first Macintosh signified the era of the personal computer, Glass signifies the era of us, humans, being almost one with the machines.

Moreover, Glass is a huge paradigm shift: whereas “virtual reality” provided us with a simulation of the real that remained separate from the real, Glass turns the real into a simulation of itself. We won’t be talking about “augmented reality” anymore; we will be “augmenting reality” at will, by gesture and by voice command.

Glass can also easily integrate with Google’s data pool, capitalize on it enormously, and push personalization to a whole new level. Glass will be everywhere with you. In the grand scheme of things, it’s one of our baby steps toward seeing how machines see, toward augmenting our world and making it exponentially more usable. One step closer to seeing, utilizing our immediate environment, and harnessing data with the power of machines, right in front of our eyes.

Glass might as well be our first digital hallucination. Things don’t show up when they want but when you want them to. When there’s nothing to show, there’s nothing to see; ergo you’re offline while always being online.

Imagine seeing this small card in the top-right corner of your right eye, on a small projector-like monitor. Because Glass knows where you are, it knows what information to show you. That’s context. It can be your boarding information, or news about your favorite team while you’re watching the football game. Glass’s capabilities are currently quite limited: it can take and display photos and videos; read and send email and texts; make and receive calls; do video hangouts; give turn-by-turn map directions, personalized Google Now suggestions, and Google search; and, of course, it supports apps.

We’ve talked a lot about personalization through wearables and contextual services. It’d be nice to see the bigger picture: this does not apply only to a select few but to every Internet user. Imagine being able to personalize the world for at least 1.3 billion people. It would be amazing.

Take, for example, the already discussed iBeacon. iBeacon is a contextual technology that can transform retail and other experiences from the ground up. You enter a store that has iBeacon installed and, when you reach the t-shirt section, it might send you a push notification recommending one, because your wardrobe will know which garments you wear and will be able to recommend additional items.

So you might be interested in that yellow t-shirt and you want to buy it. Just tap on the notification for a one-tap purchase with Square or Google Wallet or even your iTunes account or any other payments platform out there. And perhaps you might not even need to carry it back home because it’ll be delivered to your home within a few hours with an Amazon drone.

The following project is from an art gallery based in Antwerp which, in order to nudge people toward a more interactive experience, has installed iBeacons throughout its exhibitions and gives visitors an iPad to interact with them. Pretty amazing.

The possibilities with wearables, contextual services, sensors, and intelligent AI and machine learning are endless. It’s up to us to create innovative services that challenge the status quo and introduce something new which wasn’t possible before.

Of course, all this is directly bound up with entrepreneurship. When you have that opportunity, where something you believe in should be possible and the technology is enabling it, that’s a pretty good way to start a business. And you should. Assuming it’s really exciting, assuming some people think they’re going to change the world with you and that they might make money in the process, then you can build whatever you want.

We have several major technologies that enable the new knowledge economy: predictive AI, recommendations through pattern recognition, contextual services, and wearable computing devices. These broad categories, with their subsets, can be applied in either a B2B or a B2C way to industry verticals like health, home automation, retail, government and city planning, transportation, and finance.

And here’s a list of a few companies doing this stuff in one way or another: Google with Glass and its driverless car; Proteus with the sensor-enabled pill; Foursquare, redefining the way we experience our cities; Nest, making our homes smarter; Estimote, putting iBeacons everywhere; Uber, which might introduce driverless cabs; and Waze, which transforms the way we drive by showing real-time information about the road, accidents, traffic, police, and more. As the world-renowned Silicon Valley VC Marc Andreessen said, “software is eating the world.” Originality often consists in linking up ideas whose connection was not previously suspected. We can’t predict the future; we can only say it’ll be exciting.

Connectivity is the basic assumption and natural fabric of everyday life for us. Technology connections are how people meet, express ideas, define identities, and understand each other now. Older generations have, for the most part, used technology to improve productivity — to do things we’ve always done, faster, easier, more cheaply. For our generation, being wired is a way of life.

But you have to know that part of the work of your, and our, generation is going to be technological, using scientific ideas to serve the interests of society, and part of it is going to be fundamentally human, tied to the qualities of the human condition, the human emotion, that dominate the whole of history. These things are not separate; they are inexorably linked. None of this is to say that social media, the web, and the Internet in general should not be critiqued. Indeed, they should and must be. But critiques of them should begin from the understanding that a networked and connected society is already happening around us.

The day Facebook bought SMS

Facebook bought the messaging platform WhatsApp for $16+ billion. A huge amount of money by all accounts; an extremely bold and strategic move by Mark Zuckerberg, a guy who doesn’t fool around with the innovator’s dilemma.

There comes a moment, when you’re that big a company, when you need to keep growing. A tipping point of sorts, when you either hunt or become the hunted. Facebook has completely dominated the developed/Western market with more than 1.2 billion users and naturally needs to expand beyond it in order to keep the growth rates coming.

Emerging markets—despite being often overvalued, sometimes overlooked, and at other times ignored—are the next big thing and, in Facebook’s case, the next “1 billion connected people”: Facebook’s core mission and Mark Zuckerberg’s personal raison d’être. WhatsApp has a monumental global presence and is the clear winner of the “messaging platform wars.”

TechCrunch’s Matthew Panzarino:

Instagram’s $1 billion sounds really lame now.

Update: Also relevant: Four numbers that explain why Facebook bought WhatsApp courtesy of Sequoia Capital, WhatsApp’s investors.