Mads Voigt Hingelberg, Author at Innovation Lab

AI on a budget
https://innovationlab.net/blog/ai-on-a-budget/
Fri, 12 Feb 2021

How do you know if AI will be the solution to your problem? And are your data even ready for it?

The post AI on a budget appeared first on Innovation Lab.


I frequently hear from contacts in the digital business space that they urgently want to get started with AI, to get their first learnings. The push comes mainly from C-level executives, or fortunate individuals of wealth and business acumen who drive their brand-new intelligent cars and subscribe to Elon Musk’s vision of…everything.

 

Alas, we must recognize that the AI in a self-driving car has taken billions of gigabytes of data and countless compute-hours to develop and tune. Maintaining the car’s AI alone is a tremendous task, requiring many people with a variety of skills.

 

Self-driving cars have three advantages when it comes to making AI succeed.

  1. They generate a lot of high-quality data (used to improve and train the next version of the AI).
  2. The output is just four controls (speed, brake, left, right).
  3. The input they consume is “just” video and serves a closed-domain purpose (in this case, navigating traffic).

Your business data, however, is an entirely different matter!

Some companies have a lot of data. Recently, I was working on a project for a team with a dataset of 45 million data points. They explained that they had a particular error and wanted to forecast when it would occur. Anecdotally, it was supposed to happen every two weeks, so with one full year of data (45M points), we should have been OK.

 

What happened was that there were issues every week, but not the kind they were looking for. The error they cared about occurred only five times across the full year of data, and we had to conclude that we would not be able to build anything meaningful with AI to identify it early on.

In this case, the outcome was a decision not to do an AI project. The analysis concluded that AI would not help us solve the problem. Instead, good old statistics came to the rescue, and we found an excellent solution for separating the important errors from the less important ones. We handed over the algorithm, and the project ended.
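A minimal sketch of the kind of "good old statistics" that can replace AI in a case like this — the weekly counts and the z-score rule below are hypothetical illustrations, not the client's actual data or algorithm:

```python
from statistics import mean, stdev

# Hypothetical weekly error counts from one year of logs (not the client's data).
weekly_error_counts = [3, 4, 2, 5, 3, 4, 41, 3, 2, 4, 5, 3, 38, 4, 3, 2]

def flag_important_weeks(counts, z_threshold=2.0):
    """Flag weeks whose error count deviates strongly from the usual level."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > z_threshold]

print(flag_important_weeks(weekly_error_counts))  # -> [6, 12]
```

No training, no model, no GPU hours — a simple threshold on how far a week deviates from the norm already separates the important weeks from the noise.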

 

In other scenarios, AI is clearly the right choice for a given problem, but it is by no means a given that it will work in every case. To find out, we perform a pre-AI analysis of the data provided. The analysis focuses on the desired goals and on which data, at what frequency, can affect those goals. In the car’s case, the goal is easy: “just don’t crash.” In business data, identifying the goal is orders of magnitude more difficult.

 

How do I know if my data are ready for AI?

It can be challenging to tell but, generally, AI works well for finding patterns. The rule of thumb: if a human cannot provide a relatively large set of examples of the situations we want AI to detect automatically, it will be difficult.

 

Getting a human to classify, e.g., images or sounds is a well-known case where AI works well. However, detecting the historical factors that lead up to a customer taking the leap and buying a particular product entails a whole new level of complexity.

 

There are promising prospects for business data in quality control, where a report of “rejected products” serves very well as a training set. The same goes for customer segmentation, where knowing who the “bad” customers are can serve as a training set.
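To make the "rejected products as a training set" idea concrete, here is a minimal sketch — the field names and measurements are made up for illustration:

```python
# Hypothetical quality-control report: each product carries its measurements
# and whether the inspector rejected it. The rejection flag IS the label.
inspection_report = [
    {"product_id": 101, "measurements": [9.8, 0.42], "rejected": True},
    {"product_id": 102, "measurements": [10.1, 0.05], "rejected": False},
    {"product_id": 103, "measurements": [10.0, 0.07], "rejected": False},
]

# Features X and labels y, in the shape most training pipelines expect.
X = [row["measurements"] for row in inspection_report]
y = [1 if row["rejected"] else 0 for row in inspection_report]
```

The point is that no extra labeling effort is needed: the historical report already pairs each product with a human verdict, which is exactly what a supervised model trains on.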

Just add the data, cook on full heat for a couple of hours on one of our supercomputers and get the result.

Using AI, there is also the option of identifying the factors that affect a desired outcome. In other words, you can ask, “What makes a good customer?” What made the ten best production runs the best? Which parameters? Was it the employees, the materials, the sunshine, the afternoon cake in the canteen? Well, just add the data, cook on full heat for a couple of hours on one of our supercomputers, and get the result. It’s not quite that easy, but with the tools available, it’s not rocket science either.

 

Some time ago, we evaluated a project concerning microscopy imaging of crystalline structures in metal, but again, that’s images and thus at the “easy” end of the scale. That project broke down on IT infrastructure: someone needed to get the data from the microscope to the cloud and back, and that can be expensive. It’s another excellent learning from our past projects: if you decide to take the next step after an AI pre-analysis, you may need to invest in additional software, cloud-service subscriptions, and new processes.

 

In summary: images and video streams, as used by cars, mobile phones, and YouTube, are the “easy” part, as a well-known set of requirements and methodologies applies. Corporate transactional data of various kinds is more challenging to work with, but the benefit is equally high. Imagine being able to do something that may be difficult but will be out of your competitors’ reach for years to come.

 

If you are unsure and want to know whether your data are ready for AI, give me a call. The process is quite simple and requires minimal effort on your part. At Innovation Lab, we will crunch the numbers, give you feedback on your readiness, and suggest a path forward. You’re also more than welcome to take a look at this whitepaper describing a product that can help you get started on your first AI project (in Danish).

The EU commission white-paper on AI in 5-minutes
https://innovationlab.net/blog/the-eu-commission-white-paper-on-ai-in-5-minutes/
Mon, 24 Feb 2020

Critical takeaways for the busy person.

The post The EU commission white-paper on AI in 5-minutes appeared first on Innovation Lab.


The Commission wants to create a legal framework for AI – and who can blame them? It’s early days and pretty “cowboy” in terms of the procedural framework around AI.

 

The Commission acknowledges that AI will have a significant impact on the European economy going forward, and it is considered and supported as a growth area, specifically for SMBs. This requires that we focus our efforts on training and education in the area, as we currently have a shortage of AI competencies.

 

In the legal framework, the EU is looking at the following:

 

1. Human agency and oversight

  • Decisions in, e.g., medical diagnostics should always be approved and validated by a human. The same goes for weapons, I presume.

 

2. Technical robustness and safety

  • Safety-dependent applications like self-driving cars must be tested and certified by an EU authority – like the NCAP tests for physical safety.

 

3. Privacy and data governance

  • The datasets used must have documentation covering the relevancy of their usage. E.g., if you use images of faces, you must explain the extent of the purpose – a little like GDPR.

 

4. Transparency

  • AI models are inherently opaque. Creators must provide documented proof of their workings and, in some cases, supply the training dataset itself.

 

5. Diversity, non-discrimination and fairness

  • The results of AI predictions are only as good as their human trainers. Historically, there have been cases of strong racial and gender bias, both in facial analysis and in crime-related predictions. The EU seeks to change this through regulation.

 

6. Societal and environmental wellbeing

  • This topic will not result in a legal framework. However, it is on the agenda because AI calculations consume significant power, and the EU’s intention is that development efforts should aim not only at better AI but also at saving power in the process.

 

7. Accountability

  • Tying the above together: you will be penalized for non-compliance. If we extrapolate from the current fines for GDPR, the penalties could potentially amount to significant sums of money.

 

My personal conclusion

I don’t like to be regulated by anyone. I do, however, recognize the need for regulatory efforts in any society for it to achieve its goals for the common good.

The EU has once again proven to be the front-runner in taking a qualified look at the future and taking the necessary steps to ensure the rights of the ordinary EU citizen. Generally, I think this will be mainly a good thing, but it will be challenging to provide transparency into the inherently opaque workings of AI.

Quantum Computing Explained
https://innovationlab.net/blog/quantum-computing-explained/
Tue, 30 Aug 2016

IBM Quantum Results Ready!

The post Quantum Computing Explained appeared first on Innovation Lab.


June 14th, 2016, was a pivotal day in my life.

It was the day I executed my first program on a quantum computing server. This machine is very cold – 0.018 kelvin (about 2.7 degrees colder than the background temperature of the universe). Inside the enigmatic machine are five so-called qubits. Think of them as small balls that can be either black, white, or gray. On top of this, IBM has created a language that can manipulate the balls, turning them into small elements that can be programmed to perform certain actions.

That all sounds very ordinary, but consider this: the balls can be black, white, or black and white at the same time (let’s call that state gray). They are on the scale of single atoms – and they can affect each other through something called quantum entanglement, meaning they do not need to touch each other to change their states. In fact, one popular interpretation says that in order to do what they do, they need to exist in multiple parallel universes at the same time.

[Image: two dice]

Practically, we look at a given problem from a solution perspective. Let’s say we have two dice, each with six sides, and we want to find the combinations of the two that give the number 5. Intuitively, we know it must be 1+4 and 2+3, but a classical computer program would have to run through each combination, evaluate the result, and only output it if the success criterion (sum = 5) is met. That takes at least two steps.
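The classical brute-force approach can be sketched in a few lines — enumerate every pair of faces and keep the ones that hit the target:

```python
from itertools import product

def combos_with_sum(target, sides=6):
    """Classically enumerate all (die1, die2) pairs and keep those summing to target."""
    pairs = [(a, b) for a, b in product(range(1, sides + 1), repeat=2) if a + b == target]
    # Collapse mirrored pairs like (1, 4) and (4, 1) into one combination.
    return sorted({tuple(sorted(p)) for p in pairs})

print(combos_with_sum(5))  # -> [(1, 4), (2, 3)]
```

Note that the loop inspects all 36 pairs to find the two answers — exactly the step-by-step evaluation the text describes.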

A quantum computer needs only one step. We start by specifying the result, 5, and the quantum computer – through its weird logical operators – returns all the input values that meet the success criterion in one go.

So is it magic? No. Is it exotic and awesome? Hell, ya’!

[Image: a quantum “score” in the IBM composer]

For its quantum computer, IBM has created a visual programming language called “scores”. It follows the analogy of musical scores on a note sheet, or strings on a guitar. Each “note” is an operation, like add, subtract, or entangle, and once your score is complete, you can “play” it inside the quantum computer. The result is a “chord”, like on a guitar, and the digital representation of the chord is the result.

In this example, I have three qubits (or three guitar strings) that can each be 0 or 1. Initially, they are all 0: [0,0,0]. The green boxes with the “X” turn them into [1,1,1]. Then, I use the turquoise operator with the “+” sign to change the value of the third bit back to 0 (a so-called NOT operation). If I did the same again, it would be 1, as NOT 0 = 1 and NOT 1 = 0. Finally, a measurement of the three qubits returns the results from interdimensional space to the real world and provides the output [1,1,0] – as expected.
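Because this particular score uses only X (NOT) gates and no superposition or entanglement, its behavior can be mirrored classically. A small sketch of that classical shadow (it deliberately captures nothing quantum):

```python
def x_gate(bits, index):
    """Apply an X (NOT) operation to one non-superposed qubit."""
    flipped = list(bits)
    flipped[index] ^= 1
    return flipped

state = [0, 0, 0]            # all three qubits start at 0
for i in range(3):           # the three green "X" boxes
    state = x_gate(state, i)  # -> [1, 1, 1]
state = x_gate(state, 2)     # the extra NOT on the third qubit
print(state)                 # -> [1, 1, 0], matching the measured output
```

The real machine only becomes interesting once gates like Hadamard put qubits into the “gray” in-between state, which this classical sketch cannot represent.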

[Image: the measurement result]

“What’s all the fuss about,” you may ask, “you’ve just added two numbers – my PC can do that too?” – and yes, that is correct. But imagine a more sophisticated quantum computer, one with 256 qubits. Such a device is just a few years around the corner.

Remember the example with the dice: a quantum computer could find the two combinations summing to 5 in just one calculation, where a regular computer needed at least two tries.

A 256-bit number is rather large: 2^256 = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 – roughly on the order of the estimated number of atoms in the entire universe. A normal computer would need a trillion billion lifespans of the universe to complete even a fraction of such a calculation; a sufficiently large quantum computer might need less than 1/100 of a second. It could thus threaten SSL, the internet’s cryptography standard for secure transmissions. Effectively, anyone with such a 256-qubit quantum computer might be able to read all your mails, have a peek at the account balance in your bank, and log in to your Facebook account.
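Checking the size of that number takes one line, since Python handles arbitrarily large integers natively:

```python
# The number of states 256 qubits can represent simultaneously.
states = 2 ** 256

print(states)             # the 78-digit number quoted above
print(len(str(states)))   # -> 78
```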

So, progress moves forward. New cryptography standards emerge, and we will, in time, be able to counter the threat. However, we will also gain access to enormous computing power, giving us a giant leap in everything from cancer research and space exploration to artificial intelligence. Here at Innovation Lab, we are keeping a close eye on this technology, as we are sure it will shape the next 10 years.

Want to try? Go ahead! Dig into quantum computing yourself at IBM Quantum Computing, and have fun. Questions? Send me a mail at bigdata@ilab.dk. And if you are interested in more explanations of complex concepts, take a look at my blog post about the difference between big data and small data.

Regards,

Mads Voigt Hingelberg

Big Data vs. Small Data
https://innovationlab.net/blog/big-data-vs-small-data/
Thu, 11 Aug 2016

Big Data. Small Data. What is really the difference?

The post Big Data vs. Small Data appeared first on Innovation Lab.


This is the age of Big Data. It surrounds us like the clouds in the sky, seeming to be a solid mass; yet it is nothing but haze when we look inside from an airplane on our way home from vacation.
It is not tangible or clearly defined. However, at Innovation Lab, we have found a statement that tries to epitomize the concept: Big Data is the difference between what we want to do with data and what we can do with data.

This is an age-old problem. Since the dawn of time, people have struggled to compile and structure information – and to turn it into decisions about future business strategies. In fact, this is part of why International Business Machines came to build computers. IBM’s early business included typewriters and punch-card tabulating machines, and a lot of the information typed in by staff in, e.g., banks was compiled in archives where searches took ages to perform. That was Big Data back then. In other words, the requirement to find files quickly to support decision-making, and the inability to do so, drove the creation of the computers we use today. Indeed, the reason files on a computer are called files is that they were originally physical files. The digital files were stored in a database: a file storage from which files could be instantly extracted and written to printers on demand.

Today, using data to drive decision-making is a must-have for bigger businesses. Yet they still struggle with data, and as the amount of data increases exponentially, our ability to interact with it does not keep up.

YouTube can show videos but cannot decode the content, narrative, and meaning of a guy eating chili or a girl doing makeup. Images pose a big problem too. Even Google and Facebook, for all their clear minds and vast resources, cannot figure out how to make real sense of an image…and the list goes on…

Small data is equally puzzling for decision-makers. Usually, small data is the product of a small business, or of a business that is not traditionally data-driven. We don’t see many mechanics analyzing the number of bolts and joints used on different vehicle types over a year in order to optimize their stock of spare parts. Likewise, a flower shop owner will have a reasonable sense of season, flower species, and the quantities to acquire from suppliers, but deep analysis of the exact optimal mix over time to maximize revenue is an uncommon practice. The challenge is the same, but here the difficulty lies in collecting, structuring, and analyzing the data, whereas big businesses’ challenge lies in the complexity of their data.

In Denmark, as in most – if not all – countries, the major part of GDP is provided by small and medium businesses. Therefore, any added GDP benefit from increased use of data for decision-making must come from that particular segment. Alas, enter the toolbox, without the tools.

SMBs do not have the tools and competencies needed to work with data. Business analysis is for professionals, like yours truly; it is expensive and not easily accessible. The person who develops a tool that helps SMBs use data and eases their decision-making, without requiring a statistical or financial background, will make a lot of money. So far, however, we have yet to see someone take up the challenge.

With Big Data, the data is too complex to analyze, too large to understand, or moving too fast to make sense of. With Small Data, collecting and analyzing the data are the main problems. Both problem statements require solutions, and both are equally difficult.

If you want to know more about Big and Small Data, reach out to me at bigdata@ilab.dk, and let’s have a talk! I have also explained Quantum Computing, if you are looking for even more practical takes on complex concepts.
