Interview with Andrés Torrubia: “Deep learning is the single reason AI is in the news today”

Andrés Torrubia, co-founder and CEO of Fixr.

We are glad to interview Andrés Torrubia today, a successful tech entrepreneur with a strong background in artificial intelligence. What sets Andrés apart is his unique role in the Spanish tech ecosystem, exposing us to AI in ways few others do.

As a developer, he is a very active member of the international AI ecosystem and brings that experience back to us. “Machine learning communities are vibrant and welcoming. Teaming up with others is a norm rather than the exception”, he says below.

And as you’ll see as you read the interview, Andrés can help us understand what the world will look like when dominated by Chinese and US AI superpowers. “We are leaving core infrastructure (finance, telecommunications, news, etc.) in the hands of foreign players”, he argues.

Andrés (@antor, 44) likes to say that he has the luxury of living near the beach in Alicante (Spain) while working in the US through FIXR.COM. He has participated in multiple deep learning competitions; most notably, he finished #1 in a self-driving-car LIDAR point-cloud segmentation challenge organized by Alibaba in China, competing against 1,400+ teams.

In this interview, we mostly wanted to ask him about how AI might impact our lives, and how to be prepared for it.

Andrés Torrubia and Colin Powell.

We’ve all been hearing about AI for a long time, but something has changed in the last few years. How is AI different now from 10 years ago? What has changed?

Andrés Torrubia: Deep learning is the single reason AI is in the news today. Thanks to deep learning, AI has become effective, useful and more accessible.

While most of the theory and methods that support supervised deep learning are more than a decade old, the surge in availability of datasets and computing power has allowed deep learning to achieve results that many in the industry thought would take much longer.

Some countries with political systems very different from ours are investing heavily in AI, much more than Europe. This might bring some challenges for Europeans. How do you see Europe in a world where others dominate AI? What consequences do you think this could have?

Andrés Torrubia: AI is a general purpose technology that will be used across many industries and facets of society, even those that have been relatively immune to information technology disruption.

Unfortunately, in Europe (at least in technology) we have a history of investing in core research but failing to leverage those investments commercially. The web was invented in Europe (CERN), yet the only originally European web browser is almost history (Opera); the MP3 audio compression format was invented in Germany, yet multimedia codec licensing is a US operation (MPEG LA); and so on. The more you look at the value chain from fundamental research to consumer commercialization, the more you see US (and recently Chinese) companies making inroads into our daily lives, with local (European) players getting a smaller piece of the action.

In Europe, at least in technology, we have a history of investing in core research but failing to leverage those investments commercially.

This trend has already played out in the information economy, and AI will accelerate it even further due to its self-reinforcing cycle: good AI companies gather more data; with more data they build better products; commercializing those products reaps more profits; and those profits let them hire the best talent, which in turn feeds back into the cycle.

In the same way that it is almost impossible to compete with Google in the search-engine business, it will be very difficult, if not impossible, to compete with Google, Amazon or Apple in voice recognition, due to the massive advantage they have in data (each time we use their systems, we help make them better), resources and talent to build AI voice-recognition products.

In contrast to Europe, China has used a combination of:

1) asymmetry (making it difficult or impossible for foreign tech companies to compete in the Chinese market while importing technology and know-how to China from those countries) and,

2) a sharp multi-level focus to build its AI ecosystem: education, incentives to bring talent working abroad back to China, loose regulation, data sharing, financial incentives, etc.

I believe the consequences of not having a strong AI industry are two-fold:

1) Economic influence: just as US companies have taken a bigger share of the information economy, AI will make it feasible for these companies to cross the line from the information economy into the physical economy. For example: the last mile of package delivery (e.g. Amazon's) is done by a local courier today; soon it could be done by autonomous drones powered by AI.

As AI becomes infrastructure, we are leaving core infrastructure (finance, telecommunications, news, etc.) in the hands of foreign players.

2) Trust: as AI becomes infrastructure, we are leaving core infrastructure (finance, telecommunications, news, etc.) in the hands of foreign players. That is fine as long as they are friendly, but even if they are, there is an evident risk of a misalignment of values. The US sees healthcare and education differently from us in Europe, and China has a different view altogether. This will become an issue if Chinese censorship starts happening outside China as the country gains more influence globally.

How can we influence EU politicians?

European regulation intends to defend citizens. Do you think all this regulation is a disadvantage for EU tech companies, compared to their US and Asian counterparts?

Andrés Torrubia: I think it is a bit more complex. In the US antitrust laws are intended to protect consumers, not competitors. European regulation may be well intended but in my opinion badly executed, for two reasons:

  1. Early regulation can severely hamper the success of emerging companies, especially startups. In the US and China (at least in technology) the tendency has been to leave markets loosely regulated so that small companies and entrepreneurs can focus on product-market fit without spending precious resources on legalese and compliance. Once those small start-ups get bigger (think Google, Amazon, etc.), they welcome regulation, because they can afford to spend on compliance and they know new contenders cannot.
  2. Execution. We are in a sad state of affairs when technology companies like Google pay more in fines to the European Commission than in taxes, when the user experience of browsing the web is utterly destroyed under the pretext of cookie disclaimers and information-sharing notices (which could have been handled at the browser level), and when the focus is placed on privacy rather than on human manipulation at scale.

How do you think we could influence our politicians to make better technology decisions?

Andrés Torrubia: We need a much stronger technology industry in Europe. If you look at the European stock exchanges and compare them to those of the US and Asia, Europe's top stocks look more like those of the 80s: we do not have Googles, Facebooks, Amazons, or Netflixes.

Some would say that technology, and AI by extension, can bring a lot of benefits to society: highly-paid jobs, money in the form of tax receipts, and progress. While this is true, I think it is short-sighted. The challenges we are going to face in the next decades are immense: an ecological crisis reinforced by growing energy demand, a growing and aging population, disruptive job displacement, wealth inequality and international tensions.

I see technology and AI as a matter of survival, the same way countries used to mobilize all of their precious resources when at war.

I see technology and AI as a matter of survival. In the same way that countries used to mobilize all of their precious resources when at war, we should acknowledge that the world is entering uncharted territory. The two paths available are either to look back to the past (and give up many of the rights and benefits we've worked hard to earn, like healthcare, education, freedom, etc.) or to invest in science to help us conquer the next stage of human welfare.

No political discourse that I know of has technology, science and the scientific method as its main toolbox. We owe our world to fundamental discoveries in physics which, supported by math, gave birth to our technological society with all of its benefits. I think it's time to move on from nineteenth-century politics and devise a social system that, while it may look utopian today, could be achieved thanks to science and technology.

You are the co-founder and CEO of Fixr, and you use AI to solve a number of problems there. Could you give us an estimate of the % of revenue that you attribute to using AI? I know it's not an easy question :) Do you think your competitors who don't use AI are less efficient in their business model?

Andrés Torrubia: Our case is very specific; what I can say is that the majority of our growth is due to AI.

You participate in many international AI competitions. Can you please describe the process? The first steps, and how it evolves. Meaning: you start DiDi's challenge competing alone against thousands of other engineers, then you partner with a Russian nuclear physicist… For a beginner, that whole process can be intimidating. Please tell us about it.

Andrés Torrubia: Get your hands dirty.

No matter how much you read, nothing compares to building your code from scratch. The first step is to download the dataset and build an intimate relationship with the data. I like to do some exploratory data analysis (EDA) rather than treating the data as a black box. The process looks a lot more like interactive programming in BASIC from the 80s than like regular, boring, boilerplate software engineering in Java. Many machine learning practitioners use Python inside Jupyter notebooks, which allows for extremely fast experimentation and immediacy.
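As a minimal illustration of that first exploratory pass (using a tiny invented dataset rather than a real competition download), EDA often starts with something as simple as counting classes and spotting missing labels:

```python
from collections import Counter

# Tiny synthetic stand-in for a competition dataset:
# each record is (features, label); None marks a missing label.
records = [
    ({"height": 1.2}, "car"),
    ({"height": 0.4}, "pedestrian"),
    ({"height": 1.1}, "car"),
    ({"height": 0.5}, None),
    ({"height": 3.0}, "truck"),
]

# First-pass EDA: class balance and missing labels.
labels = [label for _, label in records]
class_counts = Counter(label for label in labels if label is not None)
n_missing = sum(1 for label in labels if label is None)

print(class_counts)   # class distribution
print(n_missing)      # how many records lack a label
```

Seeing an imbalanced class distribution or a chunk of missing labels at this stage is exactly what shapes the training strategy he describes next.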

In competitions you do not have time to create completely new deep learning architectures, so instead you should get familiar with the data and focus your strategy on how to use your limited resources to train your system in the most efficient way. More often than not, labelled data contain errors, so part of your strategy should be how to deal with them: use them anyway, fix them, discard them, etc.
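The three strategies he mentions (use, fix, discard) can be sketched as one simple cleaning pass; the label names and the fix-up mapping below are invented purely for illustration:

```python
def clean_labels(records, valid_labels, fixes):
    """Keep valid labels, remap known mislabelings, discard the rest."""
    cleaned = []
    for sample, label in records:
        if label in valid_labels:
            cleaned.append((sample, label))          # use as-is
        elif label in fixes:
            cleaned.append((sample, fixes[label]))   # fix a known error
        # otherwise: discard the record
    return cleaned

records = [("a", "car"), ("b", "carr"), ("c", "???")]
cleaned = clean_labels(records, valid_labels={"car", "truck"},
                       fixes={"carr": "car"})
print(cleaned)  # [('a', 'car'), ('b', 'car')]
```

In a real competition the choice between these branches is itself part of the strategy: fixing labels preserves data, discarding trades volume for cleanliness.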

Machine learning communities are vibrant and welcoming. Teaming up with others is a norm rather than the exception, as sharing tricks, techniques and tools with people from diverse backgrounds is incredibly enriching. It may look daunting, but it is not.

Andrés Torrubia at NASA.

Machines can read images, more and more. Be bold: what is the biggest change this will bring?

Andrés Torrubia: Computational perception has, in my opinion, been one of the Achilles' heels of the IT industry. We do not realize it, but many common actions we perform daily are the result of us adapting our natural behavior to make it easier for computers to understand us.

We have to drive cars manually not because computers cannot control cars but because computers do not understand the exterior environment of a car (obstacles, pedestrians, etc.).

When we go to a supermarket, we have to wait in line for a while and go through the cash register, where they scan every item we bring. We do so because the computer cannot determine what we've taken without us almost spelling it out for it.

Those are just two perception applications, but huge ones. From a product management perspective, the way I think of perception AI is: what would the user experience look like if the user had a person with them translating what they see and hear in real time to a computer? AI makes this potentially doable.

The convenience this will bring will be great. The danger, again, is not privacy but the potential use of this perceptual data to perform individual manipulation at scale. Personally, I am totally fine walking into a store, having my face recognized as I take a bunch of snacks, and having the supermarket charge them to my account directly without the hassle of waiting in line at the cashier. But not at the expense of the supermarket leveraging this data for other purposes, or worse still, selling it to other parties to build a profile of me so detailed that they would know which ads would trigger my desire to buy a certain product, or even worse, coerce me into doing something (without me even realizing it).


Cities are big generators of data. Apart from autonomous vehicles, what’s going to change?

Andrés Torrubia: I wish that city management (and by extension city politics) were more data-driven where data is available, so the first step is to collect city data and make it transparent. I am not a smart-city expert, but I think governments are pretty bad at picking winners, so my thinking would be to make the data available, identify the problems first and then devise solutions. As an engineer / entrepreneur it is sometimes tempting to fall in love with the solution to a problem, but it should be the other way around: you have to fall in love with the problem, and then find the best solution.

An engineer wants to learn AI. Where should she start?

Andrés Torrubia: Learn by doing. I really recommend taking courses and then participating in small projects or competitions. If you are not an engineer, take the AI For Everyone course by Andrew Ng.

Last question: you are being interviewed by Chicisimo – you know we’ve built the Fashion Taste API to help fashion retailers understand the taste of each individual shopper. We think AI and data are going to transform fashion. And you? How do you think that AI will change fashion?

Andrés Torrubia: Oh! If you saw my closet you would understand why I am not even qualified to answer your question, and if you captured my data I would be an outlier in your dataset ;-).

Hope you enjoyed the interview. There are so many gems in it that it's difficult to decide which is the best part. Thanks for reading!

You might also like:

Taste graphs will transform fashion – Fashion Taste API

How do people describe their clothes and outfits? What clothes do they have in their closets?

There is plenty of data in the manufacturing and distribution of clothes. But once clothes are sold and people are wearing them, there is nothing. No data. Nada.

In the following lines, I will share the following:

  • Taste graphs will transform fashion;
  • They will focus on understanding post-purchase clothing behaviour;
  • They will allow tech companies to understand taste, as Spotify does with music;
  • They will end up owning people’s attention, because they will be useful.

Analyzing demand for outfits

The learnings below are based on four years of analyzing the demand for outfit ideas, and then building the Fashion Taste API, an omnichannel personalization engine for fashion retail, to help fashion retailers understand the taste of each individual shopper.

We learnt that two important questions were: How do people describe their clothes, outfits and what-to-wear needs? What clothes do they have in their closets, and how do they wear them? To answer these questions, we built a fashion ontology and a taste graph, plus all the infrastructure to automate outfit advice.

An outfit is a playlist of clothes. It is also a correlated list of descriptors: it can be comfy, or perfect for the weekend. An outfit contains correlations among clothes, and the deep meaning that a person assigns to her clothing preferences. Outfits provide a unique perspective into closets.
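One way to picture the "playlist" idea in code (the field names and sample values here are illustrative, not Chicisimo's actual schema): an outfit is a small record that correlates a set of garments with the descriptors a person attaches to the combination.

```python
from dataclasses import dataclass, field

@dataclass
class Outfit:
    """An outfit as a 'playlist': correlated garments plus the
    descriptors (meaning) a person assigns to the combination."""
    garments: list
    descriptors: list = field(default_factory=list)

weekend = Outfit(
    garments=["white sneakers", "mom jeans", "oversized sweater"],
    descriptors=["comfy", "perfect for the weekend"],
)

# The outfit implicitly correlates every garment with every descriptor.
correlations = [(g, d) for g in weekend.garments for d in weekend.descriptors]
print(len(correlations))  # 3 garments x 2 descriptors = 6 pairs
```

Each outfit a person logs thus yields a batch of garment-descriptor correlations, which is exactly the raw material a taste graph aggregates.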

An outfit is a playlist of clothes

Taste graphs bring a unique opportunity to own the fashion space

The biggest opportunity in fashion technology today is to build a mechanism to understand post-purchase clothing behaviour. And then, build technology on top of that understanding: technology to help people feel better with their clothes.

Traditional tech efforts focus on efficiently selling more clothes to people, and ignore the post-purchase experience. Once the purchase is finished, companies are blind and can’t see what happens next.

Offering a post-purchase experience that helps people feel good in their clothes will let the winner own people's attention, and much more as a result.

Taste graphs will power a Spotify for fashion

A Spotify for fashion will understand your taste and needs, because it’ll be powered by a taste graph.

It will help you decide what to wear at any time. You’ll be able to easily store your clothes in a virtual closet, and it will put outfits together for you. It will help you plan your outfits depending on your context, and will suggest new clothes that match your wardrobe.

Helping people feel well with their clothes will be the key functionality of such a service. People want to feel well with their outfits. They want to feel confident, comfortable, happy, beautiful, unique, sexy, stylish, powerful. Instead of that, many people feel stressed or bored or tiny. More than about clothes, it’s about wellness.

1.- Capture units of taste data

Before we try to understand taste, we need to understand what type of data to focus on. Spotify focuses mostly on playcounts (each time you listen to a song), and a playcount clearly reflects your current behaviour.

We have learnt that the units of capturable taste data are related to text and images. Words express a need (“i need ideas to go to the office”). Images of clothes represent the clothes people own, and need help with. There are other units of capturable taste data, but it comes down to text and images. Then, in our mobile app we’ve built different easy-to-use input interfaces to capture data and allow people to communicate with the system. You can also see our In-Bedroom Fashion Stylist and our Digital Closet technology.

2.- An Ontology to understand fashion data

But fashion has a problem: it lacks a common classification system. The expression of clothing behaviour is very fragmented: text and images have different meanings for each person, and each person expresses the same concept differently. Due to the lack of this classification (or taxonomy), people’s data is noisy and algorithms cannot work with it. To solve this problem, we’ve built a fashion ontology, which is the backbone of our taste graph.

Our ontology gives structure and meaning to the incoming data. It allows us to interpret data. It is a multilevel “list” of hundreds of thousands of unique ways to describe what-to-wear needs. Think of Netflix's initial classification system or Google's synonym matching.
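A toy sketch of what such normalization does (the surface terms and canonical labels below are made up, not taken from the real ontology): many different user phrasings collapse onto one canonical node, so algorithms see consistent data.

```python
# Toy ontology fragment: surface expressions -> canonical descriptor.
ONTOLOGY = {
    "office look": "work outfit",
    "what to wear to work": "work outfit",
    "business casual": "work outfit",
    "comfy": "comfortable",
    "cozy": "comfortable",
}

def canonicalize(expression: str) -> str:
    """Map a user's phrasing to its canonical ontology node,
    falling back to the raw text for unknown expressions."""
    return ONTOLOGY.get(expression.lower().strip(), expression)

print(canonicalize("Office look"))  # 'work outfit'
print(canonicalize("cozy"))         # 'comfortable'
```

The real system is described as multilevel with hundreds of thousands of entries; the point of the sketch is only the many-to-one mapping that makes noisy user language computable.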

A derivative of our main ontology is our ontology of meta-garments, abstractions of specific garments. These meta-garments are the result of another learning: only certain attributes of a garment are relevant when solving the problem at hand. This ontology is 100% user-driven; it has been built from the bottom up, and it is the result of the need to help people with their outfit needs.

3.- Taste graphs to understand fashion taste

When we get dressed in the mornings, we establish correlations among clothes, and among our ways of describing our outfits and needs. Look at yourself today: you are wearing a playlist of correlated descriptors and clothes. You’ve built an offline taste graph.

Taste graphs capture those correlations among descriptors, outfits and people. Think of it as a brain that understands “what goes well with” any garment, or for any occasion, etc. It has this understanding because it analyzes hundreds of millions of correlations, outfits and queries. Then, it filters them to your specific characteristics and context.
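In miniature (with invented sample outfits, and plain co-occurrence counting standing in for whatever the real graph uses), a "what goes well with" answer can be derived from how often garments appear together across outfits:

```python
from collections import Counter
from itertools import combinations

# Invented sample outfits; each is a set of garments worn together.
outfits = [
    {"white sneakers", "jeans", "t-shirt"},
    {"white sneakers", "jeans", "blazer"},
    {"heels", "dress"},
    {"white sneakers", "dress"},
]

# Count how often each pair of garments co-occurs across outfits.
pair_counts = Counter()
for outfit in outfits:
    for a, b in combinations(sorted(outfit), 2):
        pair_counts[(a, b)] += 1

def goes_well_with(garment):
    """Rank other garments by how often they co-occur with `garment`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == garment:
            scores[b] += n
        elif b == garment:
            scores[a] += n
    return [g for g, _ in scores.most_common()]

print(goes_well_with("white sneakers"))  # 'jeans' ranks first (co-occurs twice)
```

At the scale the post describes (hundreds of millions of correlations), the same idea is filtered further by each person's characteristics and context.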

Our taste graph allows us to respond with output to any input. We call it the Social Fashion Graph and we patented it back in 2012. You might think that the image below is super simple, but that’s how simple we want the system to be: receive input > produce output.

Taste graphs simplify complexity

The end game

Taste graphs will provide structured and correlated taste data, and will then allow teams to build personalized and meaningful services for each person. Our closets will be taste graphs connected to e-commerce catalogues (also graphs), and everything will change. Taste graphs will transform fashion. Head over to the Fashion Taste API to learn more.

Thanks for reading!

Fashion technology: how to understand fashion taste

The largest opportunity in fashion technology today is to focus on helping people feel better in their clothes. To do so, we need to understand fashion taste. Once we achieve that, we will be able to build personalized experiences on top of each person's unique taste.

But understanding fashion taste and clothing behaviour at scale is a difficult challenge. That’s our focus at the Fashion Taste API. We are building the infrastructure to automate outfit advice. This infrastructure includes four assets:

1.- A fashion app with a virtual closet. The app helps women plan their outfits, and Apple regularly features it as App of the Day worldwide. The app helps us learn outfit and clothing behaviour;

2.- A taste graph that is in charge of structuring fashion data and assigning the right descriptors to outfits and to people. In fashion technology, obtaining clean data is not easy because of the lack of a fashion taxonomy. We've built a fashion ontology and embedded it into the taste graph, which helps us solve this problem. We call our taste graph the Social Fashion Graph;

3.- A data portal. This portal provides transparency to the team, and ease of access to data;

4.- Fashion technology patents. These patents protect innovations in three fields: image-based shopping, outfit and closet data, and outfits search.

Are you interested in fashion technology? Please read more about the Fashion Taste API.

Helping women plan their outfits, by using a vertical machine learning approach

About a year ago, we shared our approach and our vision on how we've applied machine learning to help women plan their outfits and use a virtual closet. We didn't have a blog back then, so we shared our thoughts on Medium. Now we want to link to it from here.

In that post, we focused on the following aspects:

What is Omnichannel Personalization?

Please visit the Medium post How we grew from 0 to 4 million women on our fashion app, with a vertical machine learning approach or read the Hacker News discussion. You can also read about our taste graph here.

Further reading: