Flash Boys, Michael Lewis

After reading Liar’s Poker, I was in a NYC bookstore and couldn’t resist buying this book.

I’m not an expert in the field; I don’t even consider myself an amateur, just a surprised citizen who was around.

I knew what HFT was, the importance of latency, and that HFT accounted for a high share of market volume…

But concepts such as dark pools, front-running, etc. have been a great discovery for me. And the anecdotes about real things that happened to the different flash boys are some of my favorite parts of the book.


Prestashop cache, disable it!

This is the summary: I had to disable the Prestashop cache.

The Prestashop cache mechanism is a different thing from the CDN. To avoid confusion: I have activated a CDN and it works fine delivering the static content of the store (images, JS, CSS, etc.).

The issue is Prestashop’s internal cache system.

I thought that using this cache was a good thing; it should be beneficial, shouldn’t it? In my case, it is not.

This cache system uses a lot of resources (mainly CPU) to generate and update the cache. When you disable it, every single request goes through the PHP web server and the database, but in most scenarios the CPU and memory used are lower than with the cache activated.

[Chart: CPU and memory comparison for Prestashop]

Another important thing is to disable the modules you really do not use. I read that on a blog and thought “this is such a basic suggestion!!”, but 5 minutes later I had disabled 6 modules.

Enabling and disabling modules, testing the environment and tracking the response time is hard work, but at this stage I have no other option 🙁
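To track the response time in a repeatable way, a small script like this can help (a minimal sketch; the URL and the number of runs are placeholders — run it once with the Prestashop cache enabled and once with it disabled, then compare):

```python
import time
from urllib.request import urlopen

def measure(fetch, runs=10):
    """Call fetch() `runs` times and return (average, worst) in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings), max(timings)

# Example usage (placeholder URL — point it at the shop's home page):
# avg, worst = measure(lambda: urlopen("http://my-shop.example/").read())
# print(f"average: {avg:.2f}s  worst: {worst:.2f}s")
```

Averaging over several runs smooths out one-off spikes, and the worst case shows the peaks that a single measurement would hide.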

Web response time on WordPress

In April I started taking care of the response time of this WordPress blog.

After adding wp-cache, a CDN, and checking the quality of the code, the results are:

1.- YSlow grade = A, overall performance score = 96

2.- Load Impact: it’s not perfect, but the average goes from 7.5s to 5s, with only 2 high peaks.

[Load Impact chart]

3.- WebPagetest: this one goes from 72/100 to 93/100; First Byte Time is still bad, the “Compress images” score has decreased and “Cache static content” has improved.

[WebPagetest chart]

The job is not perfect, but for me it’s now time to go for Prestashop!



What CPU is assigned?

When you look for web hosting you will see there are many features that are clearly stated, for instance:

  • We offer unlimited memory size.
  • We offer unlimited bandwidth
  • etc.

But they will rarely mention the CPU assigned to your virtual, shared environment, or the way that CPU is allocated.

Ask for it!!


How to Monetize Application Technical Debt

Look on the internet for a report from Gartner and CAST Software named “How to Monetize Application Technical Debt”.

They present a simplified formula to measure the technical debt of a given application, based on the number of violations per thousand lines of code (violations per KLOC).

The formula proposed for Technical Debt is:

  • L = Number of Low-Severity Violations per KLOC
  • M = Number of Medium-Severity Violations per KLOC
  • H = Number of High-Severity Violations per KLOC
  • S = Average Application Size (KLOC)
  • C = Cost to Fix a Violation ($ per Hour)
  • T = Time to Fix a Violation (Number of Hours)

Technical Debt per Application = [(10% * L) + (20% * M) + (50% * H)] * C * T * S
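The formula above can be written as a small function; the weights are the percentages from the report, while the sample figures in the example are made up for illustration:

```python
def technical_debt(low, medium, high, size_kloc, cost_per_hour, hours_per_fix):
    """Gartner/CAST technical-debt estimate for one application.

    low, medium, high: violations per KLOC, by severity.
    size_kloc:         application size in thousands of lines of code.
    cost_per_hour:     cost to fix a violation ($ per hour).
    hours_per_fix:     time to fix one violation (hours).
    """
    weighted = 0.10 * low + 0.20 * medium + 0.50 * high
    return weighted * cost_per_hour * hours_per_fix * size_kloc

# Illustrative numbers: 100 low, 50 medium and 10 high violations per KLOC,
# a 300 KLOC application, $75/hour, 1 hour per fix.
# Weighted: 0.10*100 + 0.20*50 + 0.50*10 = 25 violations per KLOC,
# so the estimate is 25 * 75 * 1 * 300 ≈ $562,500 for this one application.
debt = technical_debt(100, 50, 10, 300, 75, 1)
```

Note how the weights discount low-severity violations: only a fraction of them is assumed to be worth fixing.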

They do not forget the business value as an important element of the application value, and they add a conceptual graph visualizing the application’s Technical Debt and Business Value as a function of structural quality violations.

Application Technical Debt and Business Value as a Function of Structural Quality Violations (Conceptual)

Logistics, 3PL & 4PL

I’m working on a logistics transformation, and it is letting me learn about the state of the logistics market: where it all comes from and where it is all going…


Nowadays many companies outsource part of their processes, typically IT, and keep the management activities inside the company. The main question to answer before moving to 4PL is: who needs to be in control of the supply chain, and who can manage it better?

  • In the 3PL model, logistics is considered a core process and is managed by the company.
  • In the 4PL model, logistics is considered a key process and is managed by a third party.

A 4PL functions more like an internal logistics organization, whereas a 3PL may focus on specific logistics activities. Think of a 4PL as an orchestra conductor, working to bring together the best-in-class components of a world-class supply chain. In some way it means being part of the company’s supply chain, not just another vendor on the list.

The 4PL company defines a strategy and implements it.

This is a big change for manufacturing companies, which have always had logistics as one of the core processes in their organizations.

A 4PL has mainly three key goals:

  • Driving spend improvement for the organization.
  • Increasing the supply chain competency.
  • Waste elimination.

This trend continues to evolve; there are many 4PL organizations able to demonstrate the value they provide and the competitive advantage they offer to manufacturing companies.

Top performers and Plan B people

In organizations, succession plans are not always planned, and there are many funny situations when a person needs to replace someone. Basically, the options are: you assign a top performer, who will lead and execute the work in an extraordinary way; or you assign someone who is around, because you need a quick replacement, you don’t care, or any other reason.

When someone replaces another person, the first question that comes to my mind is: is this guy a top performer, or a plan B? The second question is: where does this guy come from?

On top of all this, it is also relevant to understand who the person taking the decision is; let’s call them the boss.

It’s important to understand this, because it will enable you to understand part of their behavior and the network where they move. You also need to understand that the other person will do a similar exercise.

In organizations where there is stability and a clear leader (the boss), he will sometimes cover vacancies with “plan B” profiles: they do not need a top performer in every single position, they are not willing to pay for a “top performer”, and they want to control the decisions in that area (a plan B will require guidance, and the leader loves this type of relationship).

In organizations with many moves, the situation can be the opposite: the boss requires top performers to change the direction of things and push the organization towards its goals.

There are people who are always top performers; they push their careers with a clear vision of where they want to go, and they get there. Being a top performer is not only about achieving results: they are also able to sell themselves very well and explain their abilities.

In general, being a plan B is not recognized as a good thing, and it’s true, but be careful. For me, being a plan B is not intrinsically negative. I have seen people able to have successful careers moving through organizations as a permanent “plan B”, making themselves irreplaceable and keeping their personal value quoted high; as a consequence of all this, they are successful. In these situations they play with an additional advantage: other people look at them as a “plan B” and underestimate their capacity to perform the role.

During your professional career it may happen that sometimes you are a top performer, and other times you are a plan B. In my case, I have been both.

What about you? Are you a “top performer” or a “plan B”?

Application TCO dilemma

Business Applications are the primary mechanism for delivering IT value. Applications are expensive to purchase, but even more expensive to maintain.

You have strategic plans, balance sheets, portfolio management cycles… and budget reviews.

Application TCO is one of the recognized mechanisms to manage the IT budget, and it’s also one of the major headaches for CIOs and IT departments.

Challenging questions:

  • How do you evaluate the TCO of your portfolio?
  • Do you have a classification of applications by type of TCO calculation?
  • How do you measure the ROI of an application?
  • How many years does your application life-cycle analysis cover?
  • Which depreciation methods do you apply?
  • What costs can be associated with software development over a 3, 5 or 10-year maintenance cycle?
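To make the depreciation and multi-year cost questions concrete, here is a minimal sketch using straight-line depreciation only; all figures in the example are made up, and real TCO models add discounting, upgrades and decommissioning costs:

```python
def straight_line_depreciation(purchase_cost, residual_value, years):
    """Annual depreciation charge under the straight-line method."""
    return (purchase_cost - residual_value) / years

def application_tco(purchase_cost, annual_opex, years, residual_value=0):
    """Naive multi-year TCO: acquisition cost plus recurring run costs.

    Deliberately simplified: no discounting, no upgrade or
    decommissioning costs, constant OPEX every year.
    """
    return purchase_cost - residual_value + annual_opex * years

# Made-up example: a $200k licence, $60k/year maintenance, 5-year cycle.
# Yearly depreciation: (200000 - 0) / 5 = 40000
# TCO over 5 years:    200000 + 60000 * 5 = 500000
charge = straight_line_depreciation(200000, 0, 5)
tco = application_tco(200000, 60000, 5)
```

Even this toy model shows why maintenance dominates: over a 5-year cycle, the recurring OPEX outweighs the purchase cost.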

Keeping the data related to an application portfolio up to date is very difficult; having a clear TCO for each one of its assets is a nightmare.

I have found that one of the common strategies of IT organizations is to create a legacy department that takes care of all the applications that are not valuable from the IT point of view and that keep evolving there forever, prolonging the OPEX for years. TCO is measured at a high level, but not in detail.

It’s true you have to rationalize and concentrate on the most important assets of your portfolio, but be careful with this strategy, because the speed of transformation and the amount of new “legacy” applications keep growing, and suddenly you will find that a big percentage of your portfolio is “legacy”.

Then, after some years in this situation, the only way to handle it is with a transformation program; let’s call it the “Remove legacy applications” program.

If we look at other areas of IT (for instance platforms or end-user computing), they all work with very detailed asset lists and a rigorous understanding of life-cycle, TCO, TCO forecast… the same organizations that have a mess in applications.

Managing requirements

Keeping requirements under control is sometimes very difficult: continuous changes to specifications, continuous changes of direction, quick estimations demanded on weak specifications, etc.
Making the customer recognize all the changes they are putting in place is even harder. They just do not care; you are there to handle it and be paid for doing it.
I like this part of the deals: it enables me to understand their business in depth, the way they work internally, and to gain trust.
[Image: “Walking on water and developing software on some requirements is easy if both are frozen”]

SAP and their cloud journey

Some years ago SAP announced the creation of SAP Business ByDesign (BBD). This solution was created to cover the ERP needs of mid-size companies and be one piece of the puzzle in the journey of moving SAP services to cloud-based solutions (others are: SuccessFactors for HR, Ariba for procurement).

The relevance of BBD in the market has been low, and SAP was losing the race to cloud services in this area.

Then the go-to-market strategy changed. After many discussions about the pricing models to offer SAP on Amazon Web Services, you can now acquire SAP on the AWS Marketplace.

The target market is different, but it’s relevant how the synergies between infrastructure and applications (now cloud services) are established.

From a positioning point of view, it’s interesting to highlight the way SAP is sold. In the past SAP decided the pricing models and the way to acquire the software; now, in the AWS Marketplace, the rules are defined by Amazon: price per hour.