Reading the chapter “Better for Less” of Wardley Maps, I found a concept that was new to me: the OODA loop.
It’s a strategy cycle whose name stands for:
Observe the environment, Orient around it,
Decide your path and Act.
The creator of this concept was John Boyd, whose Wikipedia page is really interesting to read. The basics are represented in the diagram shown below:
Before you run, just walk
Putting a strategy cycle or continuous mindset in place is not easy, and Simon Wardley’s suggestion is:
- Start with Just do it,
- then jump into Plan, Do, Check, Act (Six Sigma),
- finally move to Observe, Orient, Decide and Act.
Time is the dominant parameter
Think about an aircraft pilot: the one who goes through the OODA cycle in the shortest time prevails, because his opponent is caught responding to situations that have already changed.
Think about the continuous decisions that need to be made, the continuous changes in different aspects of the environment, and the need to overcome them.
A basic scheme that you can put into practice and repeat can be useful, not just for an individual but for a whole organization that is able to adapt itself as a whole.
Cryptocurrencies have a B-side: scams. There are so many that we cannot ignore them: Ponzi schemes, false roadmaps, …
Shit happens, but with proper due diligence you can minimize the risks.
Basic due diligence (basic questions)
The general consensus about due diligence is: do a standard due diligence as if it were a traditional company, plus check specific questions from the blockchain environment. Things such as:
- Exaggerated return percentages.
- Unclear or very complicated transaction fees.
- Who are the owners? (Yes, some of the ICOs I have read do not list a single name. Incredible, isn’t it?)
- Do they have a clear roadmap?
- Are they achieving the defined roadmap?
- What is the implemented token model?
- What is the value of the cryptoasset in the economy’s value chain?
- How is the crypto asset valued?
- Is the consensus model a trusted one?
I will list the ones I know happened, ordered by the amount scammed:
- Pincoin token: run by Modern Tech, a Vietnamese cryptocurrency company. The team disappeared after scamming around $660M (April 2018).
A complete blacklist of scams and potential scams being evaluated: https://www.scambitcoin.com/blacklist/
Corda is an open source blockchain project, designed for business from the start.
Created in 2016 by the R3 consortium of financial institutions.
- No unnecessary global sharing of data: only parties with a legitimate need to know can see the data within an agreement.
- It choreographs workflow between firms without a central controller.
- Corda achieves consensus at the level of individual deals between firms, not at the level of the system.
The design directly enables supervisory and regulatory observer nodes.
- Transactions are validated by the parties to the transaction rather than a broader pool of unrelated validators.
- A variety of consensus mechanisms are supported.
- It records an explicit link between smart contract code and human-language legal documents.
- It’s built on industry-standard tools.
- It has no native cryptocurrency.
Oracles are a new concept for me; an oracle is basically an entity or organization that operates Corda nodes. We are in a financial environment, so oracles can provide information such as interest rates, exchange rates or any other data that forms a component of a contract.
Oracles operate in a commercial manner that ensures they can receive payment for their services.
Oracle providers can deploy their Oracle services into one interoperability zone and service all business networks within that zone.
Commercial offer or open source offer
If you want to compete in the payments world, you are better off with the commercial solution offered by the R3 consortium.
This is the second chapter of a learning process that started last September.
The third step is defined for the next 3 months, where the main goal is to define a specific quantitative trading strategy and work on it with real money in the cryptocurrency market.
Following the V2MOM model:
- Vision: have a strategy running in the cryptocurrency market, operating not over a period of 2-3 hours but over several days (stop operating at 3m).
- Values: have fun, learn a lot, build a team with Dani, practice and then practice more.
- Method: learn the trading basics, do backtesting with Quantopian on stocks or Forex (analyze the results in depth).
- Obstacles: Time.
- Measures:
- Make short/long decisions on a 1-hour basis.
- Read at least one book on trading.
- Perform backtesting with Quantopian and document the results and findings.
- Improve and document the “operations mode” and “backtesting mode”.
Deadline = June 2018
Results (July 1st, 2018)
- Time to be accountable, let’s go…
How many thousands of things you can see here. It’s amazing.
Created by Theodore Kruczek to help visualize orbital calculations for application to Ground Based Radars and Optical Telescopes. Based on the original work of James Yoder.
All information is open source, publicly available information.
- Orbits are derived from TLEs found on public websites.
- Payload information is compiled from various public sources.
- US sensor data was derived from MDA reports on environmental impacts and from congressional budget reports.
- Support vector machine (SVM) is a supervised learning method.
- It can be used for regression or classification purposes.
- SVM is useful for hyper-text categorization, classification of images, recognition of characters…
- The basic visual idea is the creation of planes (lines) to separate features.
- The position of these planes can be adjusted to maximize the margins of the separation of features.
- How do we determine which plane is the best? Well, it’s done using support vectors.
The support vectors are the dots that determine the planes. The orange and blue dots generate lines, and in the middle you can find what is called the hyper-plane.
The minimum distance between the lines created by the support vectors is called the margin.
The diagram above represents the simplest support vector diagram you can find; from here you can make it more complex.
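To make this concrete, here is a minimal sketch of a linear SVM with scikit-learn. The toy dataset, the kernel choice and the C parameter are my own illustration, not from the original material:

```python
# A minimal linear SVM sketch, assuming scikit-learn is installed.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two-feature toy problem so the separating hyper-plane is just a line.
X, y = datasets.make_blobs(n_samples=100, centers=2, n_features=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear kernel; C trades off margin width against classification errors.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)

print("support vectors:", clf.support_vectors_)  # the dots that fix the planes
print("test accuracy:", clf.score(X_test, y_test))
```

After fitting, `support_vectors_` contains exactly the dots described above: the points that pin down the margin lines on each side of the hyper-plane.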
Before we start
What is an error?
Observation prediction error = Target – Prediction = Bias + Variance + Noise
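For squared-error loss, this informal identity corresponds to the usual bias-variance decomposition; a sketch in generic notation (my addition, writing \hat{f}(x) for the model prediction and \sigma^2 for the irreducible noise):

```latex
% Usual bias-variance decomposition of the expected squared error.
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{Bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{Variance}}
  + \underbrace{\sigma^2}_{\text{Noise}}
```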
The main sources of error are:
- Bias and Variability (variance).
- Underfitting or overfitting.
- Underclustering or overclustering.
- Improper validation (after training). The error could come from a wrong validation set. It is important to completely separate the training and validation processes to minimize this error, and to document assumptions in detail.
Underfitting
This phenomenon happens when we have low variance and high bias.
It typically happens when we have too few features and the final model is too simple.
How can I prevent underfitting?
- Increase the number of features and hence the model complexity.
- If you are using PCA, it applies a dimensionality reduction, so the step should be to undo this dimensionality reduction.
- Perform cross-validation.
Overfitting
This phenomenon happens when we have high variance and low bias.
It typically happens when we have too many features and the final model is too complex.
How can I prevent overfitting?
- Decrease the number of features and hence the complexity of the model.
- Perform a dimensionality reduction (PCA).
- Perform cross-validation.
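A minimal sketch of both phenomena using polynomial regression with scikit-learn (the toy data, the degrees and the split are my own illustration): degree 1 underfits, a very high degree overfits, and the gap between train and test scores exposes it.

```python
# Underfitting vs overfitting via polynomial degree, assuming scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# degree=1: too simple (underfit); degree=15: too complex (overfit).
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree={degree:2d}  "
          f"train R^2={model.score(X_train, y_train):.2f}  "
          f"test R^2={model.score(X_test, y_test):.2f}")
```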
Cross-validation
This is one of the typical methods to reduce the appearance of errors in a machine learning solution. It consists of testing the model in many different contexts.
You have to be careful when re-testing a model on the same training/test sets. The reason? This often leads to underfitting or overfitting errors.
Cross-validation tries to mitigate these behaviors.
The typical way to enable cross-validation is to divide the data set into different sections, using one for testing and the others for validation. For instance, you can take a stock data set from 2010 to 2017, use the 2012 data as the testing dataset, and use the other yearly divisions to validate your trading model.
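A minimal k-fold sketch with scikit-learn (the dataset and model are illustrative; for time-ordered data like the stock example above, a time-aware split such as TimeSeriesSplit would be more appropriate than shuffled folds):

```python
# k-fold cross-validation sketch, assuming scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5 folds: each fold is used once for validation, the rest for training.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print("per-fold accuracy:", scores, "mean:", scores.mean())
```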
Neural networks deal with error in their own way: the error is backpropagated through the network, which minimizes it by adjusting the weights given to the accumulated data.
- K-means clustering is an unsupervised learning method.
- The aim is to find the clusters and the centroids that can potentially identify the underlying groups.
- What is a cluster? A set of data points grouped in the same category.
- What is a centroid? The center or average of a given cluster.
- What is “k”? The number of centroids.
Typical questions you will face
- The number k is not always known upfront.
- First determine the number of centroids, then find their values (separate the two problems).
- Is our data “clusterable”, or is it homogeneous?
- How can I determine if the dataset can be clustered?
- Can we measure its clustering tendency?
A real situation: identification of geographical clusters
You can apply a K-means algorithm where distance is used to measure similarities or dissimilarities. The key question is how the properties of the observations are mapped to distances, so that you can decide the groups based on K-means or hierarchical clustering.
Since k-means tries to group based solely on the Euclidean distance between objects, you will get back clusters of locations that are close to each other.
To find the optimal number of clusters, you can try making an ‘elbow’ plot of the within-group sum of squared distances, as in the sketch below.
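A minimal sketch of the elbow plot with scikit-learn and matplotlib (the latitude/longitude points are made up for illustration):

```python
# Elbow plot for choosing k, assuming scikit-learn and matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# Three fake geographical blobs (lat, lon).
points = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                    for c in [(40.4, -3.7), (48.9, 2.3), (51.5, -0.1)]])

# inertia_ is the within-cluster sum of squared distances to the centroids.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0)
            .fit(points).inertia_ for k in range(1, 10)]

plt.plot(range(1, 10), inertias, marker="o")
plt.xlabel("k (number of centroids)")
plt.ylabel("within-cluster sum of squares")
plt.show()  # the 'elbow' suggests k = 3 here
```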
- Logistic or logit regression.
- It’s a regression model where the dependent variable (DV) is categorical.
- Outcome follows a Bernoulli distribution.
- Success = 1 , failure = 0.
- Natural logarithm of the odds ratio: ln(p/(1−p)), i.e. logit(p).
- The inverse of the logit gives a nice “s” curve (the sigmoid) to work with.
- Equate logarithm of odds ratio with regression line equation.
- Solve for the probability p (worked out below).
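Putting the last two bullets together, the standard derivation looks like this (generic notation, with β0 + β1x standing for the regression line):

```latex
% Equate the log-odds with the regression line, then solve for p.
\operatorname{logit}(p) = \ln\frac{p}{1-p} = \beta_0 + \beta_1 x
\;\Longrightarrow\;
\frac{p}{1-p} = e^{\beta_0 + \beta_1 x}
\;\Longrightarrow\;
p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}
```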
Keep learning the basics!