Cryptoasset Valuations


I have had some conversations about the maturity of the cryptocurrency market compared to the maturity of companies, and about how I miss having the fundamental indicators you have for a public company (revenue, costs, P/E, balance sheets…).

The companies issuing cryptocurrencies are private companies and they have no obligation to share this information, but in the future there should be a general consensus about how to value their assets: the coins they put on the market.

As everybody can easily figure out, I have always stated that there will be a moment when many cryptocurrencies will disappear because they do not make sense, while others will survive because they make sense and provide value.

Well, I would like to understand how to value them (preferably before they disappear).

Value… where? In the transaction.

The basis

The basic value of a cryptoasset is the value provided to a given transaction. If you perform a transaction via the traditional or existing channel, it has a cost, a speed of execution, and a given level of security.

If you execute that transaction through a cryptocurrency, is it cheaper? Faster? More secure?…

On top of that, consider the volume of transactions:

  • How many of these transactions are happening in the market?
  • How many of these transactions are managed through this blockchain environment?
  • What are the company’s efforts to attract more transactions to its ecosystem?

Cryptoasset Valuations

I’m a beginner in blockchain and in asset valuations, so I have looked to the experts. I found this article written by Chris Burniske, where he explains how to do the valuation (he attaches a nice Excel file that helps a lot in understanding the details).

Each cryptoasset serves as a currency in the protocol economy it supports. Since the equation of exchange is used to understand the flow of money needed to support an economy, it becomes a cornerstone to cryptoasset valuations.

The equation of exchange is MV = PQ, and when applied to crypto my interpretation is:

  • M = size of the asset base.
  • V = velocity of the asset (shows the number of times an asset changes hands in a given time period).
  • P = price of the digital resource being provisioned (price of the cryptocurrency or cryptoasset).
  • Q = quantity of the digital resource being provisioned.

A cryptoasset valuation is largely comprised of solving for M, where M = PQ / V. M is the size of the monetary base necessary to support a cryptoeconomy of size PQ, at velocity V.
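As a quick sketch, the formula can be wired into a few lines of Python; all the numbers below are hypothetical, just to show the mechanics:

```python
# Equation of exchange applied to a cryptoeconomy: M = PQ / V.
# All figures below are made up, for illustration only.

def monetary_base(price_per_unit: float, quantity: float, velocity: float) -> float:
    """Size of the monetary base M needed to support an economy of size PQ at velocity V."""
    pq = price_per_unit * quantity  # total value of the resource provisioned per period
    return pq / velocity

# Example: a network provisioning 10 million units of a digital resource
# at $5 per unit, with each token changing hands 20 times a year.
m = monetary_base(price_per_unit=5.0, quantity=10_000_000, velocity=20)
print(m)  # 2500000.0 -> a $2.5M monetary base supports this $50M economy
```

Note how velocity works against the monetary base: the faster the tokens circulate, the smaller the base needed to support the same economy.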

Next steps

  • I need to better understand the calculation of the market value.
  • Go deeper into a real ICO and perform a valuation.
  • Understand how to do a token utility calculation.

OODA loop by John Boyd

Reading the chapter “Better for Less” of Wardley Maps, I found a concept that was new to me: the OODA loop.

It’s a strategy cycle that stands for:

Observe the environment, Orient around it, Decide your path and Act

The creator of this concept was John Boyd, whose Wikipedia page is really interesting to read. The basics are represented in the diagram shown below:

Before you run, just walk

Putting in place a strategy cycle or a continuous mindset is not easy, and Simon’s suggestion is:

  1. Start with Just do it,
  2. then jump into Plan, Do, Check, Act (Six Sigma),
  3. finally move to Observe, Orient, Decide and Act.

Time is the dominant parameter

Think about an aircraft pilot: the one who goes through the OODA cycle in the shortest time prevails, because his opponent is caught responding to situations that have already changed.

Think about the continuous decisions that need to be made, the continuous changes in different aspects of the environment, and the need to overcome them.

A basic scheme that you are able to put into practice and repeat can be useful, not just for an individual but for a whole organization that is able to adapt itself as a whole.

Cryptocurrency scams

Cryptocurrencies have a side B: the scams. There are so many, and we cannot ignore them. Ponzi schemes, false roadmaps, …

Shit happens, but with proper due diligence you can minimize the risks.

Basic due diligence (basic questions)

The general consensus about due diligence is: do a standard due diligence as if it were a traditional company, plus check questions specific to the blockchain environment. Things such as:

  • Exaggerated returns percentages.
  • Not clear or very complicated transaction fees.
  • Who are the owners? (Yes, some of the ICOs I have read do not list any name; incredible, isn’t it?)
  • Do they have a clear roadmap?
  • Are they achieving the defined roadmap?
  • What is the implemented token model?
  • What is the value of the cryptoasset in the value chain of economy?
  • How is the cryptoasset valued?
  • Is the consensus model a trusted one?

I will list the ones that I know happened, ordered by scammed amount:

  1. Pincoin token: a Vietnamese cryptocurrency company, Modern Tech. The team disappeared after scamming around $660M (April 2018).

A complete blacklist of scams and potential scams being evaluated:


Corda

Corda is an open source blockchain project, designed for business from the start. It was created in 2016 by the R3 consortium of financial institutions.

Key features

  • No unnecessary global sharing of data: only parties with a legitimate need to know can see the data within an agreement.
  • It choreographs workflow between firms without a central controller.
  • Corda achieves consensus at the level of individual deals between firms, not at the level of the system.
  • The design directly enables supervisory and regulatory observer nodes.
  • Transactions are validated by the parties to the transaction rather than a broader pool of unrelated validators.
  • A variety of consensus mechanisms are supported.
  • It records an explicit link between smart contract code and human-language legal documents.
  • It’s built on industry-standard tools.
  • It has no native cryptocurrency.


Oracles

This is a new concept for me: basically an entity or organization that operates Corda nodes. We are in a financial environment, so oracles can provide information such as interest rates, exchange rates, or any other information that forms a component of a contract.

Oracles operate in a commercial manner that ensures they can receive payment for their services.

Oracle providers can deploy their Oracle services into one interoperability zone and service all business networks within that zone.

Commercial offer or open source offer

If you want to compete in the payments world, you had better go for the commercial solution offered by the R3 consortium.

Quantitative trading on cryptocurrency market Q3

This is the second chapter of a learning process that started last September.

Third Quarter

The third step is defined for the next 3 months, where the main goal is to define a specific quantitative trading strategy and work on it with real money in the cryptocurrency market.

Following the V2MOM model:

  • Vision: have a strategy running in the cryptocurrency market, not over a period of 2–3 hours but over several days (stop operating at 3m).
  • Values: have fun, learn a lot, build a team with Dani, practice and practice more.
  • Method: learn the trading basics, do backtesting with Quantopian on stocks or Forex (analyze the results in depth).
  • Obstacles: Time.
  • Measures:
    • Make short/long decisions based on a 1-hour timeframe.
    • Read at least one trading book.
    • Perform backtesting with Quantopian and document the results and findings.
    • Improve and document the “mode operations” and “mode backtesting”.

Deadline = June 2018

Results (July 1st, 2018)

  • Time to be accountable, let’s go…

Keep track of space

How many thousands of things you can see here. It’s amazing.

Created by Theodore Kruczek to help visualize orbital calculations for application to ground-based radars and optical telescopes. Based on the original work of James Yoder.

All information is open source, publicly available information.

  • Orbits are derived from TLEs found on public websites.
  • Payload information is compiled from various public sources.
  • US sensor data was derived from MDA reports on environmental impacts and congressional budget reports.


Support vector machine (SVM)

The basis

  • Support vector machine (SVM) is a supervised learning method.
  • It can be used for regression or classification purposes.
  • SVM is useful for hyper-text categorization, classification of images, recognition of characters…
  • The basic visual idea is the creation of planes (lines) to separate features.
  • The position of these planes can be adjusted to maximize the margins of separation between features.
  • How do we determine which plane is best? Well, it’s done using the support vectors.

Support vectors

The support vectors are the data points that determine the planes. The orange and blue dots generate lines, and in the middle you can find what is called the hyperplane.

The minimum distance between the lines created by the support vectors is called the margin.

The diagram above represents the simplest support vector drawing you can find; from here you can make it more complex.
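To make the geometry concrete, here is a minimal pure-Python sketch of the simplest case: one support vector per class (both points made up), where the maximum-margin hyperplane is the perpendicular bisector of the segment joining them:

```python
import math

# Two hypothetical support vectors, one per class. With a single support
# vector on each side, the maximum-margin hyperplane is the perpendicular
# bisector of the segment joining them.
sv_blue = (1.0, 1.0)
sv_orange = (4.0, 5.0)

# Normal vector w points from one support vector to the other.
w = (sv_orange[0] - sv_blue[0], sv_orange[1] - sv_blue[1])
midpoint = ((sv_blue[0] + sv_orange[0]) / 2, (sv_blue[1] + sv_orange[1]) / 2)

# Hyperplane w . x + b = 0, passing through the midpoint.
b = -(w[0] * midpoint[0] + w[1] * midpoint[1])

# The margin is the distance between the parallel lines through the two
# support vectors, i.e. the distance between the support vectors themselves.
margin = math.hypot(w[0], w[1])

def side(point):
    """Classify a point by the sign of w . x + b."""
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1

print(margin)        # 5.0
print(side((0, 0)))  # -1: falls on the blue side
print(side((5, 6)))  # 1: falls on the orange side
```

With many points per class a solver has to pick which points become support vectors, but the resulting hyperplane is still described by the same w and b.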

Machine learning, source of errors

Before we start

What is an error?

Observation prediction error = Target – Prediction = Bias + Variance + Noise

The main sources of errors are:

  • Bias and Variability (variance).
  • Underfitting or overfitting.
  • Underclustering or overclustering.
  • Improper validation (after the training), which can come from a wrong validation set. It is important to completely separate the training and validation processes to minimize this error, and to document assumptions in detail.
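The error decomposition above can be checked numerically. A synthetic sketch follows; the true function, the deliberately biased predictor, and every number in it are invented for illustration:

```python
import random

# Synthetic check of error = bias^2 + variance + noise at a single point x.
random.seed(1)
f = lambda x: 2 * x          # "true" function (invented)
x, noise_sd = 3.0, 1.0       # evaluation point and noise level

def train_and_predict():
    # "Training": average 5 noisy observations of f(x), then shrink the
    # estimate by 10% to build bias into the model on purpose.
    sample = [f(x) + random.gauss(0, noise_sd) for _ in range(5)]
    return 0.9 * sum(sample) / len(sample)

preds = [train_and_predict() for _ in range(20_000)]
mean_pred = sum(preds) / len(preds)

bias_sq = (f(x) - mean_pred) ** 2                                 # ~0.36
variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)  # ~0.162
noise = noise_sd ** 2                                             # 1.0

# The expected squared prediction error decomposes into these three terms;
# analytically the total here is 0.36 + 0.162 + 1.0 = 1.522.
print(round(bias_sq + variance + noise, 3))
```

The shrinkage creates the bias, the small training sample creates the variance, and the Gaussian term is the irreducible noise; the Monte Carlo total lands close to the analytic 1.522.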


Underfitting

This phenomenon happens when we have low variance and high bias.

This happens typically when we have too few features and the final model we have is too simple.

How can I prevent underfitting?

  • Increase the number of features and hence the model complexity.
  • If you are using PCA, which applies a dimensionality reduction, the step should be to undo this reduction.
  • Perform cross-validation.


Overfitting

This phenomenon happens when we have high variance and low bias.

This happens typically when we have too many features and the final model we have is too complex.

How can I prevent overfitting?

  • Decrease the number of features and hence the complexity of the model.
  • Perform a dimensionality reduction (PCA).
  • Perform cross-validation.

Cross-validation

This is one of the typical methods to reduce the appearance of errors in a machine learning solution. It consists of testing the model in many different contexts.

You have to be careful when re-testing the model on the same training/test sets; why? Because this often leads to underfitting or overfitting errors.

The cross-validation tries to mitigate these behaviors.

The typical way to enable cross-validation is to divide the dataset into different sections, so you use one for testing and the others for validation. For instance, you can take a stock dataset from 2010 to 2017, use the data from 2012 as the testing dataset, and use the other yearly divisions for validation of your trading model.
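A minimal sketch of that year-based split in Python; the years and the fixed 2012 test fold are just the example from the text:

```python
# 2012 is held out as the fixed test set, and each remaining year takes a
# turn as the validation set while the rest are used for training.

years = list(range(2010, 2018))   # 2010 .. 2017
test_year = 2012

folds = []
for validation_year in years:
    if validation_year == test_year:
        continue
    training_years = [y for y in years if y not in (validation_year, test_year)]
    folds.append((training_years, validation_year))

for training_years, validation_year in folds:
    print(f"train on {training_years}, validate on {validation_year}, test on {test_year}")
```

Because the test year never appears in any training or validation fold, the final performance estimate is not contaminated by the model-selection loop.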

Neural networks

In neural networks, errors are handled through backpropagation: the network minimizes the error by propagating it backwards and adjusting its weights as data accumulates.


k-means clustering

The basis

  • K-means clustering is an unsupervised learning method.
  • The aim is to find the clusters and the centroids that can potentially identify the groups in the data.
  • What is a cluster? a set of data points grouped in the same category.
  • What is a centroid? The center or average of a given cluster.
  • What is “k”? The number of centroids.

Typical questions you will face

  • The number k is not always known upfront.
  • First look for the number of centroids, then find the k values (separate the two problems).
  • Is our data “clusterable”, or is it homogeneous?
  • How can I determine if the dataset can be clustered?
  • Can we measure its clustering tendency?

Visual example

A real situation: identification of geographical clusters

You can create a k-means algorithm where distance is used to measure similarities or dissimilarities: the properties of the observations are mapped so that you can decide the groups, based on k-means or hierarchical clustering.

Since k-means tries to group based solely on Euclidean distance between objects, you will get back clusters of locations that are close to each other.

To find the optimal number of clusters you can try making an “elbow” plot of the within-group sum of squared distances.
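A minimal pure-Python sketch of k-means (Lloyd's algorithm) on toy 2D "locations", together with the within-group sum of squared distances used for the elbow plot; the points are made up:

```python
import math
import random

def kmeans(points, k, iterations=50, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

def wss(centroids, clusters):
    """Within-group sum of squared distances (the y-axis of the elbow plot)."""
    return sum(
        math.dist(p, c) ** 2 for c, cl in zip(centroids, clusters) for p in cl
    )

# Two obvious geographical groups.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
for k in (1, 2, 3):
    centroids, clusters = kmeans(points, k)
    print(k, round(wss(centroids, clusters), 2))  # WSS drops sharply at k=2
```

Plotting WSS against k, the "elbow" appears where adding another centroid stops buying a big drop; for this toy data that is k=2.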


Naive Bayes classification

The basis

  • It’s based on Bayes’ theorem (check the Wikipedia link and see how complex the decision trees can get).
  • Assumes predictors contribute independently to the classification.
  • Works well in supervised learning problems.
  • Works with both continuous and discrete data.
  • It is not sensitive to irrelevant “predictors”.

Naive Bayes plot

Example: spam/ham classification
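As a sketch of how such a classifier works, here is a toy word-count naive Bayes with Laplace smoothing; the tiny training corpus is made up for illustration:

```python
import math
from collections import Counter

# Invented four-message training corpus.
train = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch tomorrow noon", "ham"),
]

# Count words per class and messages per class.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocabulary = {w for counts in word_counts.values() for w in counts}

def log_posterior(text, label):
    # log P(label) + sum of log P(word | label), with Laplace (+1) smoothing
    # so unseen words do not zero out the probability.
    total_words = sum(word_counts[label].values())
    score = math.log(class_counts[label] / sum(class_counts.values()))
    for word in text.split():
        score += math.log(
            (word_counts[label][word] + 1) / (total_words + len(vocabulary))
        )
    return score

def classify(text):
    return max(("spam", "ham"), key=lambda label: log_posterior(text, label))

print(classify("free money"))    # spam
print(classify("noon meeting"))  # ham
```

The "naive" part is visible in `log_posterior`: each word contributes its log-probability independently, exactly the independence assumption listed in the basics above.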