This blog post was inspired by a recent long conversation with Jay, Nicola, and Jesse about value in blockchains. In a previous post, Dillon and I explored the concept of merging blockchains. In light of this, I'd like to explore another concept in blockchains: hostile chain takeovers.
For context, most proof-of-work cryptocurrencies (the vast majority of them right now) have miners competing for block rewards, awarded in proportion to the computational power each miner contributes to the network. Networks like Bitcoin are among the most profitable to mine because of the substantial competition on the network, driven in part by Bitcoin's recent price appreciation (yes, it's down from its ATH, but still up 10x on the year ;))
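That proportional-reward dynamic can be sketched in a few lines of Python. This is a toy model with made-up hash rates (only the ratio matters), using Bitcoin-like parameters (~144 blocks/day, and the 12.5 BTC block reward in effect circa 2017):

```python
def expected_daily_reward(miner_hashrate, network_hashrate,
                          blocks_per_day, reward_per_block):
    """Expected coins earned per day for a given share of network hash power."""
    share = miner_hashrate / network_hashrate
    return share * blocks_per_day * reward_per_block

# A miner contributing 1% of total network hash power:
reward = expected_daily_reward(
    miner_hashrate=1.0,      # arbitrary units -- only the ratio matters
    network_hashrate=100.0,
    blocks_per_day=144,
    reward_per_block=12.5,
)
print(reward)  # ~18 BTC/day in expectation
```

The ratio is all that matters: double your share of the network's hash rate and your expected reward doubles, which is exactly why cheap, concentrated hash power is so dangerous for small chains.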
This reminded me of the earlier days of Bitcoin – if you wanted to add new consensus-breaking functionality without a sidechain, you would clone or fork Bitcoin with new rules. Muneeb Ali previously worked on Namecoin, a human-readable naming system forked from the Bitcoin codebase. A few years later, he revealed that one mining pool controlled nearly 60-70% of the Namecoin network's hash rate, breaking its security guarantees [0]. While that mining pool didn't do anything malicious, the episode showed that bootstrapping a proof-of-work blockchain from scratch is *really* difficult (and is one of the reasons why Ethereum started).
And this doesn't just happen to less secure altcoins – it's happened to Bitcoin as well! In 2014, GHash.io controlled 51% of Bitcoin's hashing power [1], causing a worldwide scare and panic. While the pool didn't do anything malicious, it certainly had the potential to. The incentive to take over the network at the time was limited to none: had the price crashed in response to an attack, GHash.io's expected return would have been wiped out.
Keep in mind, this can also happen in proof-of-stake consensus systems – they suffer from the same network-value bootstrapping problem. PoS systems such as Casper and Tendermint have designed incentives to discourage forking the network (for better or worse). However, systems like these don't require cheap electricity or commodity hardware, potentially amplifying the security (or lack thereof) by tying security costs directly to the market price of the underlying token (on this note, Mark Wilcox [2] and Paul Sztorc [3] have good criticisms of PoS that I recommend).
Long story short, all these events have shown that it's possible to take over blockchain networks for potentially malicious purposes – and there are a couple of reasons someone might want to.
Why would anyone want to take over/break a blockchain? I envision a couple of reasons:
For most small currencies, it's probably fairly trivial to point some computational power at the currency and take it over, destroying the value of the underlying coins. This also raises a larger meta-question – do miners have too much power? I'll leave you with two posts ([7] [8]) that explore this question further!
Test your knowledge of Ethereum and its underlying technology!
About four years ago, Olaf Carlson-Wee of Coinbase released a reddit post looking for more support staff to join the company. He created a Bitcoin test with some semi-advanced questions to gauge applicants' knowledge (you can still take it here). In retrospect, the test seems relatively straightforward compared to how far the field has progressed as a whole. In the spirit of the Bitcoin test, I decided to make an Ethereum SAT* to test your knowledge of Ethereum internals. Enjoy, and happy quizzing!
Ethereum is full of lots of exciting developments (it's a living science project!), so it was only natural to create an Ethereum version of the "Bitcoin Test". Given the depth of the Ethereum Project, I wasn't able to cover everything, unfortunately. If you have an interesting question about Ethereum, comment it down below! Answers can be viewed here.
Thanks to Dillon Chen for giving me feedback on earlier versions of this post.
*Note: "SAT" is a registered trademark of the College Board, to whom I have no relation. Please don't sue me.
Have any questions or comments? Feel free to comment down below or shoot me a message on twitter @niraj.
ARKit has been getting a ton of attention recently, and rightly so. Several demos showcasing the technology from @madewithARKit have been going viral on twitter. I believe it's the most exciting thing in the AR/VR space since the Oculus Rift first came out. For some background, ARKit is an iOS 11 SDK that provides powerful, low-level access to camera and location sensor data for high-quality augmentation. It was announced at WWDC 2017 and is currently available in beta preview, allowing iOS developers to get accustomed to the technology before the stable release this fall.
ARKit allows anyone with iOS programming experience to build AR applications on the iPhone. Historically, consumer adoption of AR has been slow because: 1) expensive hardware (anything $99+) is necessary for a decent experience, 2) the lack of seamless, compelling applications doesn't drive enough demand, and 3) a lengthy set-up process is required for these experiences. The beauty of ARKit is that it fixes all of these problems in a relatively cheap and effective way.
They've lowered the barrier to entry for developing and consuming AR applications, given that any iOS developer can now take advantage of the SDK. It's compatible with iOS, which opens it up to a much, much larger community than just game developers. Remember Pokémon Go? It took the world by storm with a (fairly rudimentary) AR version of a popular franchise, driving millions of downloads. Just imagine when the rendering prowess and quality of camera data increase – Pokémon Go gets even better.
ARKit is a fantastic entry point for future, more realistic AR/VR hardware and software experiences, with this technology almost acting like a testbed for future technology.
I've heard arguments that ARKit "doesn't look nearly as good as traditional AR" or won't work because "users have to download new apps" (which apparently people don't do anymore? not true). While ARKit isn't the most powerful augmentation on the market, it strikes a great balance between access, form factor, and cost. Additionally, the platform will likely usher in a new wave of applications (and consequently, app downloads). Apple is democratizing access to a future platform differentiator!
The technology adoption and readiness curves are intersecting
The beautiful thing about ARKit is that it's at the perfect intersection of the technology readiness and adoption curves. It's not too early (and not a toy like before), meaning the technology will work fairly seamlessly, leading to an overall good user experience. It's also ready for mass adoption, as anyone with an iPhone can take advantage of apps built with the SDK.
ARKit may signify the start of the "frenzy period" in augmented reality - one described well in this chart:
Apple's done a great job of making sure the technology isn't too cutting-edge, since building great products atop bleeding-edge technology is difficult in the early days. This leaves Google's Tango project somewhat behind: it has fallen prey to chasing crazy AR experiences and specs, while Apple is focusing on experience and shippability. Much as the iPhone focused on the experience of the phone rather than the specs, Apple is executing on the same thesis with ARKit. They've played their 'last mover' advantage extremely well.
Another good example of this phenomenon in play is Snapchat's Lenses feature. Lenses, if you're not familiar, is a recently-released feature that allows users to superimpose new faces/objects onto their environment. The first popular AR company may not be an AR company at all:
The first popular "AR companies" will be data/information companies that just happen to have an AR tool attached https://t.co/cfx4HyPmXY
— Will Robbins (@whrobbins) July 27, 2017
Traditional AR plays have largely been closed source – independent game and application developers want to keep their app sources private, and rightly so. Quality AR developer talent is extremely scarce, which is where ARKit can capture a lot of developer attention: now that it's available as an iOS SDK, and the iOS community has a strong culture of open-source software, we'll see a ton of cool applications built atop the technology. The barriers to entry are now super low.
Additionally, Apple is building out a strong platform for developers to capture even more attention in the App Store, pairing ARKit with existing Apple SDKs (Metal, SpriteKit/SceneKit) as well as new ones (CoreML).
The beauty of technologies like mobile phones is that, while relatively simple hardware-wise, they incorporate a ton of features into a small package. See:
A good example of this is the many heart rate sensors on the app store. Instead of needing to buy a complex heart rate monitor or counting beats manually, these apps utilize your camera + flashlight to track your heart rate. While this may not be the most accurate system, it still works quite well and all fits into one small package. Ditto for sleep tracking, GPS, crowdfunded maps, and sending data through the audio port. The iPhone (probably) wasn't designed with these applications in mind, but the platform is modular enough to allow them.
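The underlying trick behind those heart rate apps can be sketched without a camera at all: treat the average brightness of each video frame (your fingertip over the flashlight brightens slightly with every pulse) as a 1-D signal, then count its peaks. This is a toy model on synthetic data; real apps add filtering and noise handling:

```python
import math

FPS = 30       # frames per second
SECONDS = 10   # length of the recording
BPM = 72       # the "true" pulse we synthesize

# Synthetic per-frame brightness: a baseline of 128 plus a small
# oscillation at the pulse frequency (starting at the trough).
signal = [
    128 - 5 * math.cos(2 * math.pi * (BPM / 60) * (i / FPS))
    for i in range(FPS * SECONDS)
]

def estimate_bpm(frames, fps, threshold=128):
    """Count upward crossings of the baseline and scale to beats/minute."""
    beats = sum(
        1
        for prev, cur in zip(frames, frames[1:])
        if prev < threshold <= cur
    )
    return beats * 60 / (len(frames) / fps)

print(estimate_bpm(signal, FPS))  # 72.0
```

Each heartbeat produces exactly one upward crossing of the baseline, so dividing the crossing count by the recording length recovers the rate – the whole sensor is just a camera, a flashlight, and a loop.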
ARKit similarly holds this same potential for interesting non-traditional use cases. I'm excited for simple measuring tools (who carries around a ruler with them???), placement of shopping products in your home, and even more intimate guided city tours. See:
🤔 measuring a kitchen shouldn't be this satisfying...like, at all 🤔 https://t.co/7nhacasdnO → app by @SmartPicture3D 📏 pic.twitter.com/eztjNbDVyL
— Made With ARKit (@madewithARKit) July 12, 2017
This post wouldn't be complete without some cool early demos showcasing the power of the technology. Here are some of my favorites:
ARKit + CoreLocation, part 2 pic.twitter.com/AyQiFyzlj3
— Andrew Hart (@AndrewProjDent) July 21, 2017
🐍 Watch your step → inter-dimensional iPhone portals are closer than they appear 🧐 https://t.co/pqc0fRhUiQ ARkit demo by @nedd 👌 pic.twitter.com/zklYWr8CYk
— Made With ARKit (@madewithARKit) June 30, 2017
🛋️ Redecorating job? Watch how we roll in 2017 😋 https://t.co/379OeOR8w4 by @AsherVo 🦄 pic.twitter.com/ej4JO2jqP8
— Made With ARKit (@madewithARKit) July 21, 2017
Special thanks to Will Robbins, Viktor Makarskyy, and Jay Bensal for reviewing this essay. I can be reached on twitter @niraj.
I recently saw a tweet from Ilya Sukhar that particularly resonated with me:
Google needs to make "Parse for AI" to wedge themselves deeply into apps even when on other's platforms/cloud.
— Ilya Sukhar (@ilyasu) October 5, 2016
I've been interested in this space for a while. A broad prediction I have for the coming years is that, as a developer, you won't need to be proficient in machine learning to take advantage of its power. The technology is becoming increasingly democratized, opening up access to millions of new developers. Eventually, you won't even need to know how to program to perform data analysis with ML. In data warehousing, data analysts using old, traditional BI stacks will have access to a powerful new set of machine learning tools. In fact, in the future, using ML may be more about manipulating data than hard mathematics or statistics (h/t Wiley for the comparison). We're moving away from obscure mathematical derivations to teaching surface area to 4th graders.
A close comparison to this advancement is the proliferation of web development as we know it today. Building a web application looks a whole hell of a lot different than it did in the earlier days of the internet. Before, you needed a strong knowledge of TCP/IP, Solaris servers, Oracle databases, etc. to build a web application. Eventually, these were abstracted into languages and frameworks (Perl, Ruby on Rails, Bootstrap) and tools (AWS, Heroku, Parse), making the process of building, deploying, and scaling much easier. Taking it one step further, tools are even being built for non-developers to build apps (Treeline being a good example).
In the machine learning world, we're moving away from the TCP/IP days into the Ruby on Rails days. With limited ML background, it's now much easier to build ML applications than it was even a few years ago. With the rapid development of new open source toolkits, we're truly seeing a rapid commoditization of the technology:
2010-2014: a new deep learning toolkit is released every 47 days. 2015: every 22 days. tensorflow & caffe top github pic.twitter.com/fJ7JESxlip
— Kyle McDonald (@kcimc) November 10, 2015
This helps match the rapid growth of the field:
Publication dates of almost 15000 Machine Learning conference papers scraped from IEEExplore [1]
The Ruby on Rails of ML are toolkits like TensorFlow, Caffe, Theano, and convnetjs. I recently worked with a friend on setting up a TensorFlow development environment on an AWS EC2 instance, and the process was a breeze. No need to build your own neural net from scratch anymore!
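For a sense of what these toolkits abstract away, here's the kind of thing you'd otherwise write by hand: a toy single-neuron perceptron in plain Python (no framework) that learns the OR function. Real frameworks handle the gradients, optimizers, and GPU plumbing for you:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward misclassified points."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR_DATA)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in OR_DATA])  # [0, 1, 1, 1]
```

Even this trivial model needs hand-coded weight updates – multiply that by millions of parameters and you see why toolkits like TensorFlow are such a leap.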
Recently, Makoto Koike, an embedded systems engineer in Japan, noticed that his parents spent a lot of time sorting and categorizing cucumbers on their farm. The process was just as complicated as growing the vegetable itself. He wanted to automate this process to save his parents from the added manual labor. Although he had limited computer vision background, he used Tensorflow, OpenCV tutorials, and a hardware + camera setup to automatically detect the quality and size of cucumbers grown on the farm - to a relatively high degree of accuracy. Fascinating case study.
Obviously, for more complex needs you'll need deep knowledge of the technology and will have to implement special cases yourself – but the same is true of web applications. TensorFlow still covers a wide variety of general use cases, and even experts in the field use it for prototyping.
Back to Ilya's original tweet, I think there's an opportunity for a startup that liberates basic development with ML. Parse was a great product because it abstracted away the rough edges of building mobile backends. This precise model can be transferred to AI applications:
Plan: 3-4 well-scoped features (e.g. "what is this text blob about?") Make them dead simple, narrow, and decoupled from rest of ecosystem.
— Ilya Sukhar (@ilyasu) October 5, 2016
You can fight the commodity infra/analytics/push battle or woo developers with something that actually enables them to make next-gen apps.
— Ilya Sukhar (@ilyasu) October 5, 2016
A good example of a company doing this is Clarifai. They make a dead simple image/video recognition API - and it works really well. I imagine something like this for a few more use cases - categorizing text, voice recognition, intent creation and fulfillment, etc. It's what Shivon Zilis likes to call "'Alchemists' Promising To Turn Your Data Into Gold". Possibilities are endless. Shoot me a message if you're working on this - I'd love to try it out.
______________
[1] https://www.reddit.com/r/dataisbeautiful/comments/4kjivw/publication_dates_of_almost_15000_machine/
Thanks to Ritwik for looking this over. I can be reached @niraj on twitter or by email.
it can apply those resources [from Uber China - Didi deal] to technologies “up the stack” for a world in which your Ubers are autonomous — that could be pods or cars, sensors, robotics, mapping technologies, deep learning, and a host of other requirements to make a fully-integrated self-driving network a reality. With 80% of each fare you pay going to your driver, the company has a huge incentive to bite into that for its next big meal. [1]
Paper | Core | Carousel | Drive
Browser (Chrome)
OS (ChromeOS)
Computer (Chromebook)
Internet (Fiber)
You could take this from a physics perspective and think of the stack as a hill. As you climb, you build up potential energy. After you pass the milestone of building lower in the stack (climbing the hill), you've stored enough potential energy to build horizontally from there, since the underlying infrastructure already exists. When Uber created UberEATS, a lot of the work was already done, since a network of drivers already existed on the road (rather than having to be built from scratch).
Wiley and I recently recorded a podcast on this phenomenon, where we explain the idea in more detail and clarity: