Deep learning

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (paper summary)

Posted on December 7, 2021

Deep learning models are usually regarded as black boxes because they are not transparent about how they reach a prediction. Humans cannot directly interpret a model with millions of parameters, and choosing ignorance can lead to unforeseen dangers. This is an inherently bad practice that should be minimized as much as possible. Current deep learning explainability tools aim to simplify the path from input to outcome, but they do not really explain the “thinking process” that the model followed. This… Read More

Hidden technical debt in machine learning systems (paper summary)

Posted on November 30, 2021

Machine learning systems are wonderful. Many shapes and forms of machine learning algorithms are currently in use, and all of them suffer from technical debt: clustering methods like k-means, prediction methods like trees, and more advanced deep learning models alike. In traditional software engineering, technical debt takes familiar forms. On top of these “traditional” software engineering problems, machine learning systems also face new challenges. The following paragraphs present the different kinds of technical debt found in machine learning systems. 1. Encapsulation… Read More

The new software: Advantages and disadvantages

Posted on September 29, 2020

Recently I published a post about the new software paradigm. In the new paradigm, the coder does not directly program each case for every given input. Instead, training data lets the computer learn the output for each input. The computer programs itself, while the coder and software developer’s job is to prepare the data and set the target. The new paradigm seems to bring new challenges… Read More

The new software: Less coding more data

Posted on September 1, 2020

Software, like everything, is evolving, but it is evolving differently than I thought. When I was studying computer science at university, I thought the future was parallelism. We were taught only one class in parallel programming; multi-core computers were on the rise, and parallelism seemed to be the thing to learn. Since then my opinion has changed. There is indeed a need for parallel programmers, but it is not as big as I had foreseen. Most of the… Read More

Calibration for deep learning models

Posted on June 20, 2020

Wikipedia defines calibration as “the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy.” Put in our context, that means the distribution of predicted probabilities is similar to the distribution of observed probabilities in the training data. Rephrased once more: if your model is predicting cat vs. dog and it states that a given image is a cat with 70% probability, then the… Read More
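The idea above can be made concrete with a standard calibration measure: bin predictions by confidence and compare each bin’s average confidence to its observed accuracy (expected calibration error). This is a minimal sketch, not from the post itself; the function name, binary-classification setup, and bin count are illustrative assumptions.

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """probs: predicted probability of class 1 for each example.
    labels: true 0/1 labels. Returns the expected calibration error:
    a weighted average of |accuracy - confidence| over confidence bins.
    A perfectly calibrated model scores 0."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        pred = 1 if p >= 0.5 else 0
        conf = p if pred == 1 else 1.0 - p  # confidence of the predicted class
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, int(pred == y)))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(hit for _, hit in b) / len(b)
            ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece
```

For example, a model that says “cat with 90% probability” on images where it is right only half the time is overconfident, and the gap (0.9 vs. 0.5) shows up directly in the score.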