
Proof of Social Network Identity: a proof of concept

One of the main criticisms of blockchain technology is the huge amount of electricity consumed by the biggest networks (e.g., Bitcoin and Ethereum).
That huge power consumption comes from using Proof of Work (PoW) algorithms to prevent "double spending", a problem closely related to the "51% attack".

Other types of algorithms have been developed as alternatives to PoW, the most popular one being Proof of Stake (PoS), which basically implies that individuals with a higher stake (i.e., more crypto money) have more power (i.e., are more trusted) than the rest.
Hence, PoS creates an entry barrier for those who want to become members of the blockchain network, since one needs to own a stake in order to join.

The key to preventing "double spending / 51% attacks" is to use something in the block validation mechanism that is complicated or expensive enough that a malicious actor cannot replicate, clone, or create it (whatever you want to call it) in order to gain control of the network.

- In PoW, that something is computing power (which requires electrical power).
- In PoS, that something is a cryptocurrency.

But maybe there is something else one could use to create a secure-enough blockchain network, even if it meant compromising on some other feature of current blockchain networks.

Let me elaborate a little. I read somewhere that the main features of the blockchain are:
- Integrity (via hashing)
- Authenticity (via signing)
- Confidentiality (via encryption)

Those three points are an oversimplification, but they will do for the purpose of this post.
I would split the last point into the following two:
- Confidentiality of the one who submits a transaction 
- Confidentiality of the one who validates a transaction (a.k.a. miner)

Arguably, the first type of confidentiality is much more important than the second one.
In fact, it is currently quite easy to identify who the validators (a.k.a. miners in PoW) are in Bitcoin, for example, simply because mining requires such a huge investment in computers and electricity that miners stand out and are easy to find; there are also publicly-advertised mining pools whose sole purpose is to share the mining costs.
So in general, people do not really care about the validators' identity.

The identity of whoever submits a transaction, or more broadly, the identity of anyone who owns a blockchain address, is a big issue though, and it should not be possible to obtain that information from the blockchain network.

Now here is my proposal: what if, by giving up the current not-so-effective protection of validators' identities, we could implement a new type of algorithm that requires neither huge computing power nor ownership of a stake in order to participate in a blockchain network?

It occurred to me that we could use social networks (Twitter, Facebook, LinkedIn, StackOverflow, GitHub, etc.) for this.
More specifically, we could use verified social network accounts together with the open Web APIs that most of them provide in order to prevent double spending and 51% attacks.

Think about it: social networks have made a lot of progress in preventing abuse and misuse.
Of course you can still create fake accounts, but in order to get verified you need more than just an email account; you also need a working mobile phone number.
And this new type of algorithm (let's call it Proof of Social Network Identity, since plain Proof of Identity has been used in the past to refer to other things) can impose tougher requirements in order to increase security, for example:
"In order to be a validator, you need to own a Twitter account which is more than a year old, and it must have 10 or more followers"

Another example:
"In order to be a validator, you need to own a Twitter account which is more than a year old, has 10 or more direct followers and 50 or more secondary followers (followers of your followers), and you must also own a StackOverflow account with more than 10 points of reputation"
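The second example rule above can be sketched as a simple predicate. This is a minimal, hypothetical sketch: the `TwitterProfile` class and its fields are stand-ins for whatever data the social-network APIs would actually return, not part of the Social Ledger code.

```java
// Hypothetical sketch of a PoSNI eligibility check. The thresholds mirror
// the example rule above; in a real node, the profile fields would be
// populated via the social networks' Web APIs.
public class ValidatorEligibility {

    // Minimal stand-in for the profile data a social-network API would return.
    public static class TwitterProfile {
        public final long accountAgeDays;
        public final int followers;
        public final int secondaryFollowers; // followers of your followers

        public TwitterProfile(long accountAgeDays, int followers, int secondaryFollowers) {
            this.accountAgeDays = accountAgeDays;
            this.followers = followers;
            this.secondaryFollowers = secondaryFollowers;
        }
    }

    // "More than a year old, 10+ direct followers, 50+ secondary followers,
    // plus a StackOverflow account with more than 10 reputation points."
    public static boolean isEligible(TwitterProfile twitter, int stackOverflowReputation) {
        return twitter.accountAgeDays > 365
                && twitter.followers >= 10
                && twitter.secondaryFollowers >= 50
                && stackOverflowReputation > 10;
    }
}
```

The interesting part is that each clause is cheap to verify through public APIs but expensive for an attacker to satisfy at scale.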

Social network accounts which meet those requirements are not easy to obtain or fake.
And if the blockchain network is big enough, a malicious actor would need to create thousands of them to succeed, or alternatively hack into and take control of Twitter, Facebook, etc.

So the possibilities are endless. Which requirements to choose? Well, it is a trade-off between ease of access to the network and security of the network.
One could start with low requirements and then, through experimentation and trial and error while using the network, increase them if deemed necessary.

Since I am a hands-on person, I decided to build a proof of concept based on this new Proof of Social Network Identity (PoSNI) and called it Social Ledger.

I did not build it from scratch though; one of the main beauties of open-source code is that anyone can reuse an existing project, and that is exactly what I have done.

Initially I thought of forking the Go Ethereum client repository, but since I have no experience with the Go programming language, I decided to use Harmony, an implementation of the Ethereum client in Java (which is in turn based on EthereumJ).

Ok, so how does it work?
To start the validator (or miner, even though we do not really mine anything), you need a Twitter account with an app associated with it.
If you do not know how to create an app in Twitter, you may follow the instructions in the following YouTube video.

Whenever a new block containing new transactions is created, the Twitter username of the miner node creating that block is included in one of the block fields (the extradata field), and the hash of that block is tweeted by that same Twitter account using the public Twitter Web API. Then the block is broadcast to the network.
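The sealing step just described can be sketched as follows. Note that `Block` and the tweet-posting callback are simplified stand-ins, not the actual Harmony/EthereumJ types or the real Twitter client used by Social Ledger.

```java
import java.nio.charset.StandardCharsets;
import java.util.function.BiConsumer;

// Sketch of the PoSNI sealing step: the miner writes its Twitter username
// into the block's extradata field and tweets the block hash from that
// same account, then broadcasts the block.
public class PosniSealer {

    // Simplified stand-in for the real block type.
    public static class Block {
        public byte[] extraData;
        public final String hash;
        public Block(String hash) { this.hash = hash; }
    }

    public static Block seal(Block block, String twitterUsername,
                             BiConsumer<String, String> tweetPoster) {
        // 1. Record who sealed the block in the extradata field.
        block.extraData = twitterUsername.getBytes(StandardCharsets.UTF_8);
        // 2. Tweet the block hash from that same account (the real
        //    implementation goes through the Twitter Web API here).
        tweetPoster.accept(twitterUsername, block.hash);
        // 3. The caller then broadcasts the sealed block to the network.
        return block;
    }
}
```

Injecting the tweet poster as a callback keeps the sealing logic testable without hitting the Twitter API.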

Whenever a validator receives a block from the network, on top of all the regular Ethereum-standard checks, it will:
1. Get the Twitter username of the node which created/mined that block from the extradata block field.
2. Look for a tweet containing the hash of the block whose author is that same Twitter account.
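The two extra validation steps above can be sketched like this. The tweet lookup is injected as a function, so the sketch stays independent of any concrete Twitter client; the real node would query the public Twitter Web API here.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.function.Function;

// Sketch of the extra PoSNI validation: recover the miner's Twitter
// username from extradata, then confirm that account tweeted the block hash.
public class PosniValidator {

    public static boolean isValid(byte[] extraData, String blockHash,
                                  Function<String, List<String>> recentTweetsOf) {
        // 1. Get the Twitter username of the node that created the block.
        String username = new String(extraData, StandardCharsets.UTF_8);
        // 2. Look for a tweet by that account containing the block hash.
        return recentTweetsOf.apply(username).stream()
                .anyMatch(tweet -> tweet.contains(blockHash));
    }
}
```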

Such tweets look like these: [example tweet screenshots]

Also, the same miner (i.e., the same Twitter account) is not allowed to mine/create two blocks in a row.
And if there are two competing blocks, the one whose Twitter account has gone longer without creating a block takes precedence.
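These two rules can be sketched as small pure functions. The "last sealed" timestamps are hypothetical inputs that a real node would track from the chain history; this is not the actual Social Ledger implementation.

```java
// Sketch of the two PoSNI fork-choice rules: the same account may not seal
// two blocks in a row, and between competing blocks the account that has
// gone longer without sealing one wins.
public class PosniForkChoice {

    public static boolean mayMineNext(String lastBlockMiner, String candidateMiner) {
        // The same Twitter account may not create two blocks in a row.
        return !candidateMiner.equals(lastBlockMiner);
    }

    // Returns the miner whose account has been idle longer, i.e. the one
    // with the smaller "last sealed block" timestamp.
    public static String preferredMiner(String minerA, long lastSealedA,
                                        String minerB, long lastSealedB) {
        return lastSealedA <= lastSealedB ? minerA : minerB;
    }
}
```

Together the two rules force block production to rotate among distinct social-network identities instead of letting one account dominate the chain.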

This is quite a lot of information to digest, so I would recommend that you just download the Social Ledger distribution zip file, install it, configure it, and run it (so far only on Linux; I have not tested it on Mac, and making it work on Windows would require writing an equivalent Windows start script) as per the instructions in this post or this YouTube video.

I have an Amazon Lightsail VM running 24x7, so your local node should be able to join the network.
The testing I have performed so far is minimal (just three nodes), so I am not sure what will happen once more nodes join the network. But if I see there is enough interest, I may work on making the current implementation more robust and scalable, or I may create a new implementation of the Proof of Social Network Identity (PoSNI) algorithm using a more flexible blockchain platform such as Hyperledger or Corda (or something else).

