II Trail Las Palomas, Zahara de la Sierra

Last year I missed this race, but this year I managed to sign up in time. The organization learned from some of last year's mistakes and the course was very well designed: lots of trail through incredible places (replacing the asphalt sections).

We were very lucky: a sunny day with no wind. Very cold, as you would expect on December 27.

The course had only 910 meters of positive elevation gain, not the 1,400 the organizers advertised.

Race time: 3:26:05


Drupal+Acquia, interesting market penetration

Everybody has heard about WordPress and its evolution as an open source success story in the market. In the consumer-to-consumer market, WordPress is the leader in some of the main areas, no doubt.

I would like to write about a different way to use open source in the market: Drupal & Acquia. Look at the evolution of Acquia: they are using Drupal to compete with corporate WCM solutions and multi-channel e-commerce in medium and big companies.

Drupal is not trying to compete against WordPress. The mistake I have seen, reading articles and talking with people, is that they misunderstand the use cases and capabilities where Drupal can really compete.

Acquia understood this, and they have been able to penetrate a market composed of standalone solutions with a high total cost of ownership (TCO): Adobe, EMC, SharePoint…

They have been evolving over the last few years and are now shown as leaders in the Gartner Magic Quadrant for Web Content Management.


Hadoop architecture, data processing

In the process of understanding the basics of Hadoop, I found a training course on AWS that is helping me understand the concepts.

  • Map: a process that maps all the distributed blocks of data and sends the requested job to each one of them. The job sent to a block is a task with a reference to that block.
  • Reduce: an iterative process where the output of every task is sent back to be summarized or joined into a single place (see the word-count sketch after this list).
  • Replication factor = 3. This means every data block is replicated 3 times, so if one data block fails, another copy is available.
  • Jobs are divided into tasks that are sent to different blocks.
  • The block size by default is 64 MB.
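
To make the Map and Reduce steps concrete, here is a minimal word-count sketch written in the Hadoop Streaming style, where the mapper and the reducer are plain scripts that read stdin and write stdout. The file names and the tab-separated key/value pairs are just the usual Streaming conventions; take this as an illustration, not production code.

```python
#!/usr/bin/env python3
# mapper.py -- the "map" step: for each line of its input block, emit (word, 1).
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")  # key<TAB>value is the Streaming convention
```

```python
#!/usr/bin/env python3
# reducer.py -- the "reduce" step: sum the counts for each word.
# Hadoop sorts the mapper output by key, so equal words arrive consecutively.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

You can test the pair locally with `cat input.txt | python3 mapper.py | sort | python3 reducer.py`, where the `sort` plays the role of Hadoop's shuffle phase.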


Stand up desk

This is a “do it yourself” version of a professional standing desk.

What do you need? Just a simple shelf, a table, some screws, and four small angle brackets.

We cut it with the saw at the height where the elbows bend at around 90º, so it is as ergonomic as possible. The table where we set the screen is at the right height to look at the screen without neck issues.

This is the result:

Problems we faced and solutions we found:

  • As the shelf is not very sturdy, we had to add some angle brackets to increase stability.
  • When we added the printer and some other heavy stuff, the extra weight also improved stability.
  • Since we added the printer, we also added a small box so you can put your foot on it and change position a little. For instance, my heels suffer when I stay in the same position for more than 30 minutes.
  • The shelf is very useful for storing the keyboard in case you want to use another laptop instead of the main computer.

Is there any improvement you think I should add?


Hadoop components

The Hadoop platform is composed of different components and tools.

Hadoop HDFS: A distributed file system that partitions large files across multiple machines for high throughput access to data.

Hadoop YARN: A framework for job scheduling and cluster resource management.

Hadoop MapReduce: A programming framework for distributed batch processing of large data sets across multiple servers.

Hive: A data warehouse system for Hadoop that facilitates data summarization, ad-hoc queries, and the analysis of large data sets stored in Hadoop-compatible file systems. Hive provides a mechanism to project structure onto this data and query it using a SQL-like language called HiveQL. HiveQL programs are converted into MapReduce programs. Hive was initially developed by Facebook.
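
As a quick illustration of how HiveQL hides the MapReduce plumbing, here is a small sketch using the PyHive client library. The host, port, and the `docs` table are hypothetical; the GROUP BY below is exactly the kind of query Hive compiles into MapReduce jobs behind the scenes.

```python
# Querying Hive from Python via the PyHive library (host and table are hypothetical).
from pyhive import hive

conn = hive.connect(host="hive-server.example.com", port=10000)  # assumed endpoint
cursor = conn.cursor()

# Hive compiles this HiveQL into one or more MapReduce jobs.
cursor.execute("""
    SELECT word, COUNT(*) AS freq
    FROM docs            -- hypothetical table with one word per row
    GROUP BY word
    ORDER BY freq DESC
    LIMIT 10
""")

for word, freq in cursor.fetchall():
    print(word, freq)
```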

HBase: An open-source, distributed, column-oriented store modeled after Google's Bigtable (which is proprietary to Google). HBase is written in Java.

Pig: A high-level data-flow language (commonly called “Pig Latin”) for expressing MapReduce programs; it is used for analyzing large HDFS-distributed data sets. Pig was originally developed at Yahoo! Research around 2006.

Mahout: A scalable machine learning and data mining library.

Oozie: A workflow scheduler system to manage Hadoop jobs (MapReduce and Pig jobs). Oozie is implemented as a Java web application that runs in a Java servlet container.

Spark: A cluster computing framework whose purpose is to manage large-scale data in memory. Spark's in-memory primitives provide performance up to 100 times faster for certain applications.
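
To get a feel for those in-memory primitives, here is a small PySpark sketch (the HDFS path is made up): the `cache()` call keeps the intermediate result in memory, so every action after the first one avoids re-reading the data from disk.

```python
# A minimal PySpark sketch of in-memory reuse (the input path is hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-memory-demo").getOrCreate()

lines = spark.sparkContext.textFile("hdfs:///tmp/input.txt")  # assumed input file
words = lines.flatMap(lambda line: line.split()).cache()      # keep the RDD in memory

print(words.count())             # first action: reads from HDFS and fills the cache
print(words.distinct().count())  # second action: served from memory, much faster

spark.stop()
```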

ZooKeeper: A distributed configuration service, synchronization service, and naming registry for large distributed systems.
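
To see what that registry looks like in practice, here is a small sketch using the Kazoo client library; the connection string and the znode paths are assumptions. Znodes behave like a tiny replicated file system that many distributed processes can read, write, and watch.

```python
# ZooKeeper as a shared configuration store, via the Kazoo library (assumed host).
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed ZooKeeper ensemble address
zk.start()

zk.ensure_path("/app/config")                     # create parent znodes if missing
if not zk.exists("/app/config/feature_flag"):
    zk.create("/app/config/feature_flag", b"on")  # store a small config value

value, stat = zk.get("/app/config/feature_flag")
print(value.decode(), stat.version)

zk.stop()
```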


Service Mesh, Agility Platform

I kept reading document after document about how it works and the value it provides… and got only a basic, blurred idea of the whole thing. So I decided to attend a training session on Service Mesh, and this was a great decision: after these sessions, which include more than 50 hands-on exercises, I can say that I understand how powerful it is.

Just a reminder for me: CSC Service Mesh offers all customers and users of the Agility Platform a set of Agility Platform Tools to create Agility Platform-compatible images for a range of operating systems across public and private cloud providers.


ILOG Visualization Products

ILOG was founded in France in 1987; its core business was enterprise software products for supply chain, business rule management, visualization, and optimization. Its best-in-class product was its Business Rules Management System (BRMS).

In 1997 ILOG acquired CPLEX Optimization Inc.

In 2007 ILOG acquired LogicTools, which enabled ILOG to own a complete line of supply chain applications.

It was acquired by IBM in 2009. IBM re-branded the BRMS as IBM Operational Decision Manager (ODM). CPLEX and the other tools also went through some minor re-branding under the IBM Optimization Suite.

In 2014 IBM sold ILOG Visualization products (JViews and Elixir) to Rogue Wave Software, Inc.

The story will continue…

Provide discovery for free

IT consulting companies provide a wide portfolio of solutions that basically consist of a discovery phase and an implementation phase. I'm referring to consulting areas, not to products that produce reports based on input information.

Some companies give the discovery phase away for free. The two main arguments are simple:

  1. If you understand in depth the problem your customer has, then you have a good chance of winning the deal.
  2. In addition, the customer already knows their problem, and they do not want to pay for something they already know.

Nothing is free in this world; and you could say that a good salesman will sell this discovery phase for a good bunch of dollars simply because he is able to convince the customer of the value of working with him.

Google Merchant Center

Google's approach can be seen from many angles; to me, the tactics used on flight booking services are a good example: first absorb the data, then provide it from Google.com, removing the need for the third party.

Merchant Center follows the same road map. Today you spend money advertising on AdWords, specific directories, Facebook… Google is tracking the information from your store, and you can show it in Google Shopping.