This post originally appeared on Medium, and we republished it with permission from Roger Taylor. Read the full piece here.
We are in the very early days of Bitcoin adoption, and it is not clear at this point what long-term adoption will look like. One aspect of this is the blockchain services that application developers need to build their applications on.
The legacy approach
When Craig left Bitcoin to its own devices, development went in an impractical direction because of that direction’s technological appeal to developers. Because the block size was small, it was possible to index the complete contents of the blockchain, which meant that applications could ask an indexing service to tell them about the existence of any transactions featuring payments they thought they might have made.
There were a lot of cool technological things about this approach. You could take your seed words and use them to find all the payments you had ever received and, from those, all the payments you had ever made. Not only did this serve as a reliable form of backup, but by using the blockchain as a source of truth you could have the same keys in use on different computers, and the applications on those computers would synchronise to have the same contents. It was and is very user-friendly and convenient, but unfortunately it is only possible on blockchains that prevent actual usage and which can only be used for simple payments.
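For concreteness, that restore flow boils down to a gap-limit scan against an indexer. Below is a minimal sketch, assuming hypothetical derive_script_hash and indexer.get_history helpers standing in for deterministic key derivation and an ElectrumX-style indexing API; it is not any particular wallet’s implementation.

```python
# Minimal sketch of legacy seed-based restoration (hypothetical helpers).
GAP_LIMIT = 20  # stop after this many consecutive unused keys

def restore_from_seed(seed_words, derive_script_hash, indexer):
    """Scan derived keys against a full-chain indexer until the gap limit is hit.

    `derive_script_hash(seed_words, index)` and `indexer.get_history(script_hash)`
    are assumed stand-ins for key derivation and an ElectrumX-style indexing API.
    """
    history = []
    unused_run = 0
    index = 0
    while unused_run < GAP_LIMIT:
        script_hash = derive_script_hash(seed_words, index)
        entries = indexer.get_history(script_hash)  # all transactions paying this script
        if entries:
            history.extend(entries)
            unused_run = 0
        else:
            unused_run += 1
        index += 1
    return history
```

The convenience described above is exactly this loop: as long as someone indexes the whole chain for you, a seed is all you need.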
The legacy approach provides a user experience that is an illusion created by locking down the blockchain protocol and preventing both large blocks and free-form transaction creation. If the blocks get larger, then the indexers can no longer be run for free by anyone willing to cover the nominal cost. If payments can be freely made without conforming to the limited set of basic templates (represented by addresses), then it is no longer possible to search the blockchain for all your sent and received payments. If there are no free indexers, and what indexers there are cannot even be used to find all your payments, then there is no magic backup or synchronisation.
The Bitcoin Core developers had to break the Bitcoin protocol to make the legacy approach work. In order to have cool illusions, you have to prevent adoption.
The future approach
Without full-chain indexing we now require blockchain services that serve the needs of a restored protocol. It is now commonly accepted that payments should be peer to peer, and that the payer should give the payment transaction directly to the party being paid. The party being paid should not have to monitor the blockchain to receive the payment transaction, and should be the one to broadcast it, not the payer.
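As a rough sketch of that handoff, assuming a hypothetical payee endpoint and hypothetical validation and broadcast helpers supplied by the receiving application (none of these are a real, standardised API):

```python
import requests

# Payer side: hand the signed transaction directly to the payee.
# `payee_url` is a hypothetical endpoint the two parties have agreed on.
def deliver_payment(raw_tx_hex: str, payee_url: str) -> None:
    response = requests.post(payee_url, json={"rawtx": raw_tx_hex}, timeout=30)
    response.raise_for_status()

# Payee side: check the transaction actually contains the agreed payment, then
# broadcast it ourselves. `pays_expected_outputs` and `broadcast_transaction`
# are hypothetical helpers chosen by the receiving application.
def receive_payment(raw_tx_hex: str, expected_outputs,
                    pays_expected_outputs, broadcast_transaction) -> None:
    if not pays_expected_outputs(raw_tx_hex, expected_outputs):
        raise ValueError("transaction does not contain the agreed payment outputs")
    broadcast_transaction(raw_tx_hex)
```

The point is that no one in this flow needs a full-chain index; the transaction travels directly between the parties who care about it.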
What does the future API for blockchain services look like? This comes back to it not being clear what long term adoption looks like. I certainly cannot say for sure, but I have some thoughts on the subject that might be used as a starting point.
Transactions
This can be viewed as the most direct form of full-chain indexing. If you offer a service that allows any broadcast and mined transaction to be retrieved on demand at a later time, then you need to retain all those transactions in a readily accessible fashion. How do you know what the important transactions are? I think it is reasonable to say that 99.9% of them are irrelevant and will never be accessed.
Let’s think through one recent concept, the idea that the blockchain can be used for data storage. This was always ill-advised, but it was pushed by some parties, and software and services were developed based on it. It is still considered viable by some people! The repercussion is that if a blockchain service offers an API that allows requesting any needed transaction, then developers and users are going to build on it expecting it to work in the future. And that’s not a limited future, it’s the unknown point in the future when they need some arbitrary transaction. If the service locks down that API at a later time, all those developers and users will find their usage broken or impacted in some way, and they may not even realise it until they go to use whatever relies on that API.
My prediction: An API that allows unpruned access to transactions is a waste of time and resources. The viable future model of transaction access will be paying a service to retain the transactions you know you may want to access in the future. You can even do this yourself by having a junior developer put them in cloud storage and letting the cloud storage manage replication. The simplest path to obtaining the transactions you need is for the other parties with a vested interest in those transactions to give them to you, and they already have to have them.
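As a sketch of that do-it-yourself retention model, the transactions you care about can simply be written to replicated object storage keyed by their txid. The bucket name below is an assumption, and replication would be configured on the bucket itself:

```python
import hashlib
import boto3

s3 = boto3.client("s3")
BUCKET = "my-retained-transactions"  # assumed bucket name; replication is a bucket setting

def txid_of(raw_tx: bytes) -> str:
    """Double SHA256 of the raw transaction, in the usual reversed-hex display form."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()[::-1].hex()

def retain_transaction(raw_tx: bytes) -> str:
    """Store a raw transaction we know we may need again, keyed by its txid."""
    txid = txid_of(raw_tx)
    s3.put_object(Bucket=BUCKET, Key=f"tx/{txid}", Body=raw_tx)
    return txid

def fetch_transaction(txid: str) -> bytes:
    """Retrieve a previously retained raw transaction."""
    return s3.get_object(Bucket=BUCKET, Key=f"tx/{txid}")["Body"].read()
```

Nothing here depends on a blockchain service keeping an unpruned archive on your behalf.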
Merkle proofs
I really liked what I understood to be the MAPI model and think it makes a lot of sense. In theory you can get the coinbase transaction for each mined block and from those locate the MAPI services. Then you can get fee quotes from those endpoints, and you can construct payments that meet fee requirements. Then you can broadcast to a miner’s MAPI service knowing the transaction is accepted and get a callback with the merkle proof.
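To illustrate the fee quote step, a mAPI-style quote expresses mining fees as satoshis per a number of bytes, and the minimum fee for a transaction of a given size follows directly. The payload shape below (fees / miningFee / satoshis / bytes) is an assumption based on the published mAPI fee quote format and should be checked against the service you actually use:

```python
# Sketch: compute the minimum mining fee for a transaction of `tx_size` bytes
# from a mAPI-style fee quote payload. The field names are assumptions to verify.
def minimum_mining_fee(fee_quote_payload: dict, tx_size: int,
                       fee_type: str = "standard") -> int:
    for fee in fee_quote_payload["fees"]:
        if fee["feeType"] == fee_type:
            rate = fee["miningFee"]
            # Fee is quoted as `satoshis` per `bytes`; round up to the next satoshi.
            return -(-tx_size * rate["satoshis"] // rate["bytes"])
    raise ValueError(f"no {fee_type!r} fee in quote")

# Example: a quote of 500 satoshis per 1000 bytes applied to a 250-byte
# transaction gives a minimum fee of 125 satoshis.
```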
The problem is that you can only get merkle proofs for transactions you broadcast yourself through a MAPI service. Transactions you receive may already have been broadcast, and a party who paid you may not have reliably passed on the merkle proof they received from the MAPI service they used to broadcast. This means it is certain that at some point you will need a blockchain service that can provide merkle proofs for specific transactions you are interested in.
My prediction: An API that provides arbitrary merkle proofs is valuable and will get used. There is also a strong possibility that, as a developer, you should consider whether you need MAPI at all, rather than following the lead of other developers or educational material and just using it. If you already have to use a blockchain service to get merkle proofs, then you might very well just want to connect to the P2P network, broadcast transactions there, and use blockchain services to get the merkle proofs.
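To make the value of such an API concrete, verifying a merkle proof is cheap: hash the transaction up the branch and compare the result against the block header’s merkle root. The sketch below uses a generic (sibling hash, sibling-is-left) branch representation; real services use specific wire formats (for example the TSC format), so treat the input shape as an assumption:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(txid_hex: str, branch, merkle_root_hex: str) -> bool:
    """Check a transaction against a block's merkle root.

    `branch` is a list of (sibling_hash_hex, sibling_is_left) pairs from the
    transaction up to the root. Hashes use the usual reversed-hex display form.
    """
    node = bytes.fromhex(txid_hex)[::-1]  # convert to internal byte order
    for sibling_hex, sibling_is_left in branch:
        sibling = bytes.fromhex(sibling_hex)[::-1]
        pair = sibling + node if sibling_is_left else node + sibling
        node = double_sha256(pair)
    return node == bytes.fromhex(merkle_root_hex)[::-1]
```

Given the proof and the relevant block header, a client can confirm a transaction was mined without trusting the service that supplied the proof.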
Script hashes
A script hash is the SHA256 hash of an output script whose form you know in advance. It is used for two legacy purposes: either you register it with an indexing service so that you can detect when someone else broadcasts a payment to that output script, or you ask the indexer to search its full-chain archive and find all past transactions that feature a payment to that output script. This is why blockchain services used script hashes.
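For reference, this is all a script hash is, using the ElectrumX convention of SHA256 over the raw output script with the bytes reversed for display. The 20-byte hash in the example is a placeholder:

```python
import hashlib

def script_hash(output_script: bytes) -> str:
    """ElectrumX-style script hash: SHA256 of the output script, reversed hex."""
    return hashlib.sha256(output_script).digest()[::-1].hex()

# Example: the script hash of a P2PKH output script with a placeholder 20-byte hash.
p2pkh_script = bytes.fromhex("76a914") + bytes(20) + bytes.fromhex("88ac")
print(script_hash(p2pkh_script))
```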
If the reason blockchain services use script hashes was to make the broken protocol model that Bitcoin Core enforced work, then we need to ask ourselves why we would keep using them. ElectrumSV is a wallet that uses the ElectrumX indexing service to detect new payments and restore old payments using script hashes — but it is dropping this completely.
My prediction: This is a dead end. Finding transactions by knowing the full output script is a limited technique, and it is easily replaced by other more flexible approaches. A blockchain service that offers APIs based on script hashes will find itself having wasted time and resources catering for developers who need to up their game and move on from legacy approaches.
One example that might illustrate this is how it prevents someone from adding an OP_PUSH/OP_DROP to an output script to tag it with useful data. That tag might be an identifying mark so the output can be located even in a worst-case scenario, or it might be invoice-related data or anything else you can think of.
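A minimal sketch of that kind of tagging follows, prepending a pushed data blob and OP_DROP to an otherwise standard P2PKH locking script (the 20-byte hash is a placeholder):

```python
OP_DROP, OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x75, 0x76, 0xA9, 0x88, 0xAC

def push(data: bytes) -> bytes:
    # Simple direct push; only valid for data shorter than 76 bytes.
    assert len(data) < 76
    return bytes([len(data)]) + data

def tagged_p2pkh_script(pubkey_hash160: bytes, tag: bytes) -> bytes:
    """A P2PKH locking script prefixed with <tag> OP_DROP, so it carries extra
    data but is still spent with a normal P2PKH unlocking script."""
    return (push(tag) + bytes([OP_DROP]) +
            bytes([OP_DUP, OP_HASH160]) + push(pubkey_hash160) +
            bytes([OP_EQUALVERIFY, OP_CHECKSIG]))

# Because the tag changes the script bytes, the script hash changes too, which
# is why script hash lookups cannot find such outputs from the template alone.
```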
Addresses
All legacy payments were made to addresses. A standard payment used what is called a P2PKH output script, and the public key hash in the correct location in that script was the address. A multi-signature payment used what is called a P2SH output script, and the script hash in the correct location in that script was the address. This enforced a limited set of accepted payment forms: they had to be one of these two things. Addresses look random and are hard to compare, and malware exists that replaces them as users cut and paste them from one application to another.
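To make the relationship concrete, both legacy address forms are just encodings of a 20-byte hash that gets slotted into a fixed script template. The hash values below are placeholders:

```python
def p2pkh_script(pubkey_hash160: bytes) -> bytes:
    """OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG"""
    return bytes.fromhex("76a914") + pubkey_hash160 + bytes.fromhex("88ac")

def p2sh_script(script_hash160: bytes) -> bytes:
    """OP_HASH160 <20-byte hash> OP_EQUAL"""
    return bytes.fromhex("a914") + script_hash160 + bytes.fromhex("87")

# The corresponding address is the base58check encoding of a version byte plus
# the same 20-byte hash, which is why an address can only identify payments
# that use one of these exact templates.
```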
My prediction: These arcane looking addresses as a common and standard concept are something that will fade out and become irrelevant. A blockchain service that offers APIs based on addresses will find itself having wasted time and resources catering for developers who need to up their game and move on from legacy approaches.
Wrapping up
There’s a lot more nuance to this than can be touched on in this article.
There is a huge amount of work to do by both blockchain service developers and blockchain application developers, and it serves us well if we look at what we are using and ask ourselves if the resources involved were spent well.
Blockchain services
What types of application does a blockchain service aim to support? Is it a service that performs operations on behalf of its users, like HandCash, MoneyButton or SimplyCash? Is it a P2P application that has no hosted service, and where the user is in sole possession and control of their coins? Is it both? Is it something else? Do they believe that there’s a one-size-fits-all approach?
The last thing a blockchain service business likely wants is to find out that they invested a huge amount of time, resources and effort in building APIs that just aren’t that useful compared to a competitor’s. That could come down to whether or not they provide what application developers really need.
Application developers
How does a blockchain application developer know they are doing things right? There is a huge amount of Bitcoin Core oriented development resources and even starter code out there. Some of the Bitcoin SV resources probably extrapolate from those resources rather than looking at what they should be doing on Bitcoin SV.
The last thing an application developer likely wants is to find out that a blockchain service is dropping, or charging more and more for, an API they built on, and recommending they switch to another API; something that might have been avoided. Switching away from a bad choice of underlying technology might be the death knell for an application developer’s efforts, and the fault for adopting that approach might not even lie with them. They might have followed commonly recommended approaches or example code, and not have known enough to think through whether those were the best things to use.
Final thoughts
What I would like people to take away from this is that there is value in analysing these APIs and asking who they serve, and whether they will continue to be relevant and offer value in the future. Is there another approach that is obviously more flexible and more likely to remain relevant and offer that future value? While there is an element of no one really knowing what form adoption will take, I am fairly confident in my predictions.
At the very least, application developers should think through the repercussions of using APIs that have no guarantee of existing into the future, and I certainly cannot see blockchain services ever guaranteeing they will provide them. If you want to own your own data, then you want to know you have access to it, and the best way to do that is not to rely on any model that breaks if APIs change or are removed.
Watch: The BSV Global Blockchain Convention panel, The Future World with Blockchain
https://www.youtube.com/watch?v=v9hDGDoy1mM&feature=youtu.be