How statistics are calculated
We count how many offers each candidate received and at what salary. For example, if an AI and Machine Learning developer with a salary of $4,500 received 10 offers, we count that candidate 10 times. Candidates who received no offers do not appear in the statistics at all.
The graph column is the total number of offers. This is not the number of vacancies but an indicator of demand: the more offers there are, the more companies are trying to hire such a specialist. The 5k+ bucket includes candidates with salaries >= $5,000 and < $5,500.
Median Salary Expectation – the midpoint of market offers in the selected specialization, i.e., the salary level most frequently offered to candidates in that specialization. We do not count accepted or rejected offers.
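To make the counting rule concrete, here is a minimal sketch in Python, with made-up candidates and the $500 bucket width described above:

```python
from collections import Counter

# Hypothetical (salary, offers_received) pairs; a candidate with
# 10 offers is counted 10 times, and zero-offer candidates are skipped.
candidates = [(4500, 10), (5200, 3), (4800, 0), (5400, 7)]

bucket_counts = Counter()
for salary, offers in candidates:
    if offers == 0:
        continue  # no offers -> excluded from the statistics
    bucket = (salary // 500) * 500  # e.g. 5k+ covers >= $5,000 and < $5,500
    bucket_counts[bucket] += offers  # each offer counts once

for bucket in sorted(bucket_counts):
    print(f"${bucket / 1000:.1f}k+: {bucket_counts[bucket]} offers")
```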
Trending AI and Machine Learning tech & tools in 2024
AI and Machine Learning
Artificial intelligence (AI) and machine learning (ML) are two closely related technologies that you’ve likely heard alongside other buzzwords like big data and predictive analytics. Many people don’t distinguish between AI and ML at all, because the two overlap in so many ways, and big data, predictive analytics, and digital transformation are all tied to both. Yet AI and ML turn out to be quite different. Over time, the number of AI and ML products on the market has grown as firms use these programs to analyse enormous amounts of data, make better decisions, deliver recommendations and insights in real time, and produce accurate forecasts and predictions. So what is the difference between ML and AI, how are the two related, and what do these terms mean in practice when organisations talk about them? Today we take a detailed dive: let’s start with AI vs ML and uncover how these two concepts are intertwined and what ultimately sets them apart.
What is artificial intelligence?
Artificial intelligence (AI) is an umbrella term for technologies that build machines and computers able to carry out tasks we associate with humans: seeing, hearing, reading, answering questions, talking, translating, advising, making decisions, and so on. And while artificial intelligence is properly described as a technology, when we speak of ‘intelligence’ we usually picture it as belonging to the entity whose behaviour it guides, not to the underlying system. In other words, artificial intelligence is really a set of technologies implemented in a system that enable it to reason, learn and act in order to solve a problem.
What is machine learning?
Machine learning is a subfield of artificial intelligence that allows machines to learn and build on their skills and experience without explicit programming. Machine learning (ML) leverages algorithms and vast amounts of data to give a machine or system insights from which it can automatically determine a course of action. A machine learning algorithm gets better the more it is trained, that is, the more data it is exposed to. The result of running an algorithm on training data is called a machine learning model; the more data you put into the model, the better the model becomes.
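As a minimal sketch of that ‘algorithm plus training data yields a model’ idea (scikit-learn and toy, made-up numbers assumed):

```python
from sklearn.linear_model import LinearRegression

# Toy training data (hypothetical): amount of training data vs. accuracy.
X = [[1], [2], [3], [4], [5]]          # feature
y = [0.52, 0.61, 0.68, 0.74, 0.78]     # target

# Running the learning algorithm on training data yields a *model*...
model = LinearRegression().fit(X, y)

# ...which can then make predictions for unseen inputs.
print(model.predict([[6]]))
```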
AI Models and Machine Learning
An AI model can be used to automate a decision-making process. But only those that use machine learning (ML) can iteratively optimize their performance without human intervention.
All ML models are AI, but not all AI is ML. The simplest AI models are a set of if-then-else rules explicitly programmed by a data scientist. Such models are called rules engines, expert systems, or knowledge graphs; collectively, this is symbolic AI.
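To see what ‘explicitly programmed’ means, here is a toy rules engine; the loan-approval rules are entirely hypothetical and hand-written, with nothing learned from data:

```python
def rules_engine(applicant: dict) -> str:
    """A hand-written rules engine: every decision path is explicitly coded.
    Nothing here is learned from data; changing behaviour means editing rules."""
    if applicant["credit_score"] >= 700:
        return "approve"
    elif applicant["credit_score"] >= 600 and applicant["income"] > 50_000:
        return "approve with review"
    else:
        return "reject"

print(rules_engine({"credit_score": 640, "income": 62_000}))
```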
Machine learning (ML) models use statistical AI. While rule-based artificial intelligence (AI) models need to be explicitly programmed, ML models are ‘trained’ to find patterns in a dataset: they apply their mathematical formulations over and over to a set of training data – data points identified as samples that prepare the model for real-world prediction.
ML model techniques can be divided, at a high level, into three classes: supervised learning, unsupervised learning, and reinforcement learning.
Supervised Learning
Also known as ‘classic’ machine learning, supervised learning means being taught by someone with expert knowledge of the training data. A data scientist teaching a system to recognize dogs and cats (a task we’ll come back to) has to tell the AI in training which sample images are ‘dog’ and which are ‘cat’, and which key features of dog-ness or cat-ness (being big, furry, four-legged – maybe?) led to those labels. As part of its training process, the AI can then work out the general pattern of visual features we could call ‘dog-ness’ and ‘cat-ness’.
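A minimal supervised-learning sketch of the idea, with hypothetical hand-labeled measurements standing in for the dog and cat images (scikit-learn assumed):

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical hand-labeled training set: [weight_kg, ear_length_cm].
# The "dog"/"cat" labels are the expert supervision described above.
X_train = [[30, 10], [25, 12], [4, 6], [5, 5], [20, 11], [3, 7]]
y_train = ["dog", "dog", "cat", "cat", "dog", "cat"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# The trained model generalizes the pattern of dog-ness vs cat-ness.
print(clf.predict([[22, 9], [4, 6]]))  # -> ['dog' 'cat']
```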
Unsupervised Learning
Unlike supervised learning techniques, unsupervised learning doesn’t presume that there are externally ‘right’ or ‘wrong’ answers, so it doesn’t have to label anything. Instead, these algorithms recognize innate patterns in the data, grouping data points into clusters that inform prediction. For instance, e-commerce companies such as Amazon use unsupervised association models to power recommendation engines.
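A minimal sketch of the idea with k-means clustering and made-up purchase data; note that no labels are ever supplied:

```python
from sklearn.cluster import KMeans

# Unlabeled customer data (hypothetical): [orders_per_month, avg_order_value].
X = [[1, 20], [2, 25], [1, 22], [10, 200], [12, 180], [11, 210]]

# No labels are given; the algorithm discovers the groups on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignment for each customer
```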
Reinforcement Learning
In reinforcement learning, an agent learns end-to-end by trial and error: correct output is mechanically rewarded and false output punished. Reinforcement models power social media feeds, algorithmic stock trading, and even autonomous vehicles.
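A toy Q-learning sketch shows the trial-and-error loop: the agent below learns, from the reward signal alone, to walk right along a tiny hypothetical corridor:

```python
import random

# A tiny corridor: states 0..4, reward only at the far right (state 4).
n_states, actions = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                      # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # Mostly exploit the best-known action, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: reward now + discounted best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# Learned policy: the agent prefers +1 (right) in every state.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```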
The most advanced of these approaches is deep learning – a form of machine learning in which the architecture of neural networks tries to replicate that of the human brain. Information passes through layers of nodes, each layer interconnected with all the nodes of the previous one. As data moves through the neural nodes, key features are extracted from the raw input, relationships are detected, and decisions are refined; this is known as forward propagation. Once the forward pass has produced a prediction, an error-calculation procedure called backpropagation evaluates it: the weights and biases inside the neural network are adjusted to minimize the error between the predicted output and the real values. This two-step process is repeated over and over, allowing the system to improve its predictions with each iteration. Most state-of-the-art AI applications today, such as the large language models (LLMs) behind modern chatbots, use deep learning. Deep learning, arguably more than any other mode of machine learning, is hugely complex and requires massive computational resources.
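The two-step loop can be sketched end to end in a few lines of NumPy; this toy network learns XOR, with the forward pass and backpropagation marked in the comments:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)            # layer 1 weights/biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)            # layer 2 weights/biases
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # --- forward propagation: data flows through the layers ---
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # --- backpropagation: push the prediction error back through the
    #     network, adjusting weights and biases to shrink it ---
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dp
    b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh
    b1 -= 0.5 * dh.sum(axis=0)

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```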
Generative Models vs. Discriminative Models
We can characterize machine learning models by their basic approach: most are either generative or discriminative. The difference between the two lies in how each approach models the space the data occupies.
Generative Models
Generative models, which are typically an example of unsupervised learning, capture the distribution of the data points, attempting to predict the joint probability P(x,y) that a given data point occurs where it does. A generative computer vision model might learn correlations such as ‘things that look like cars are likely to have four wheels’ or ‘eyes are unlikely to be found above eyebrows’.
These models can be used to generate outputs they consider highly probable: a generative model trained on text data can power spelling and autocomplete suggestions and, at the most sophisticated level, produce entirely new text. When an LLM produces text, it has computed a high probability that that sequence of words would be assembled in response to the prompt it was given.
Other common applications for generative models include image generation, music creation, style transfer, and language translation.
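As a toy illustration of ‘generating what the model considers probable’, here is a bigram text model that powers both autocomplete and the generation of new text (the corpus is deliberately tiny and made up):

```python
from collections import Counter, defaultdict
import random

# Fit a bigram model: estimate which word is likely to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    following[w1][w2] += 1

def autocomplete(word):
    # Suggest the continuation the model considers most probable.
    return following[word].most_common(1)[0][0]

def generate(word, n=5):
    out = [word]
    for _ in range(n):
        nxt = following[out[-1]]
        if not nxt:
            break
        # Sample in proportion to learned probabilities -> new text.
        out.append(random.choices(list(nxt), weights=nxt.values())[0])
    return " ".join(out)

print(autocomplete("the"))   # most likely next word after "the"
print(generate("the"))       # freshly generated sequence
```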
Examples of Generative Models:
- Diffusion Models: Diffusion models iteratively add Gaussian noise to training data until it is unrecognizable, then learn a reversed version of that process which can ‘denoise’ an output (usually an image) from a random seed.
- Variational Autoencoders (VAEs): VAEs pair an encoder that compresses the input with a decoder that learns to invert the mapping, reconstructing likely data points from their compressed representations.
- Generative Pretrained Transformers: These ‘transformer’ models exploit a mathematical technique called ‘attention’ or ‘self-attention’ to identify how elements in a sequence of data influence one another; the ‘GPT’ in OpenAI’s ChatGPT stands for ‘Generative Pretrained Transformer’.
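The ‘self-attention’ trick mentioned above can be sketched in a few lines of NumPy; this stripped-down version omits the learned query/key/value projections a real transformer uses:

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention (single head, no learned
    projections): each sequence element attends to every other element."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise relevance
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over sequence
    return weights @ X                               # blend values by attention

X = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
print(self_attention(X).shape)                     # (4, 8)
```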
Discriminative Models
Discriminative models generally entail supervised learning. They work by modeling the decision boundaries between classes of data, usually with the goal of predicting the conditional probability P(y|x) that a specific data point (x) falls into class (y). A discriminative computer vision model may learn to distinguish ‘car’ from ‘not car’ by pinpointing a handful of distinctions (‘if it doesn’t have wheels, it’s not a car’), and so can ignore many of the correlations a generative model must account for. Discriminative models are therefore often easier to train.
Not surprisingly, discriminative models are well suited to classification problems such as sentiment analysis – but there are other applications too: decision tree and random forest models, for example, break a complex decision into a series of nodes, each ‘leaf’ representing a potential classification decision toward one class or another.
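A minimal decision-tree sketch with hypothetical ‘car’/‘not car’ features; printing the tree makes the series of decision nodes visible (scikit-learn assumed):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [has_wheels, weight_tonnes].
X = [[1, 1.5], [1, 2.0], [0, 0.1], [1, 1.2], [0, 0.3]]
y = ["car", "car", "not car", "car", "not car"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The learned tree is a readable series of decision nodes and leaves:
print(export_text(tree, feature_names=["has_wheels", "weight_tonnes"]))
```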
Use Cases
While one approach may outperform the other for particular real-world use cases, many tasks can be done equally well with either. For instance, discriminative models have a wide range of applications in natural language processing (NLP) and are superior to generative AI for many NLP tasks – machine translation, for example, is often more effective with a discriminative model than with generative AI constructing the translated text.
Likewise, for classification, generative models can use Bayes’ theorem to make predictions. Instead of determining which side of some decision boundary an instance lies on (as a discriminative model would), a generative model could calculate the probability each class would generate an instance and pick whichever has the higher probability.
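A worked toy example of that Bayes’-theorem route to classification, with made-up prior and likelihood estimates:

```python
# Generative classification via Bayes' theorem, with made-up estimates.
# P(class | x) is proportional to P(x | class) * P(class): pick the class
# most likely to have *generated* the observation, rather than testing
# which side of a decision boundary it falls on.
priors     = {"spam": 0.4, "ham": 0.6}   # P(class)
likelihood = {"spam": 0.8, "ham": 0.1}   # P("free offer" appears | class)

posterior_unnorm = {c: likelihood[c] * priors[c] for c in priors}
prediction = max(posterior_unnorm, key=posterior_unnorm.get)
print(prediction)  # "spam": 0.32 beats "ham": 0.06
```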
In fact, many AI systems operate in tandem with both techniques. For instance, in a generative adversarial network, a generative model produces the sample data, while a discriminative model checks if that data seems ‘real’ or ‘fake’. The output of the discriminative model is fed back to the generative model as training signals to refine the pattern that it generates until the discriminator can no longer tell ‘fake’ generated data from ‘real’ data.
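A compressed sketch of that adversarial loop, assuming PyTorch and using samples from a 1-D Gaussian as stand-in ‘real’ data:

```python
import torch
import torch.nn as nn

# "Real" data: samples from N(3, 1). The generator must learn to fake them.
def real_batch(n=64):
    return torch.randn(n, 1) + 3.0

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real from generated ("fake") data.
    real, fake = real_batch(), G(torch.randn(64, 1)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator, using the discriminator's verdict as its signal.
    fake = G(torch.randn(64, 1))
    loss_g = bce(D(fake), torch.ones(64, 1))   # i.e. "fool the discriminator"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift toward 3.0
```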
Classification Models vs. Regression Models
A second dimension along which to sort models is by task: most classic AI algorithms are either classification algorithms or regression algorithms; some are suited to both, and most foundation models leverage both types of function.
Such terminology is not always clear-cut: logistic regression, for instance, is, despite its name, a discriminative model used for classification.
Regression Models
Regression models predict continuous values (price, age, size, or time). They model the relationship between one or more independent variables (x) and a dependent variable (y): given x, predict the value of y.
- Algorithms such as linear regression – and variants like quantile regression – are useful for forecasting, price elasticity, and credit risk analysis.
- Regression can also capture complex non-linear relationships between variables, using algorithms such as polynomial regression or support vector regression (SVR); see the sketch after this list.
- Some generative models, such as autoregressive models and variational autoencoders, can account for the full set of relationships – including causal ones – between past and future values, which makes them well suited to forecasting scenarios such as increasingly extreme weather events.
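A minimal polynomial-regression sketch (the non-linear case from the list above), with made-up price/demand numbers:

```python
import numpy as np

# Hypothetical non-linear relationship: demand falls off with price.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)        # price
y = np.array([95, 80, 60, 45, 38, 35], dtype=float)  # units sold

# Polynomial regression (degree 2) captures curvature a straight line
# would miss; np.polyfit returns the fitted coefficients.
coeffs = np.polyfit(x, y, deg=2)
predict = np.poly1d(coeffs)
print(predict(4.5))   # a continuous prediction for an unseen price
```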
Classification Models
Classification models predict discrete classes. They are used when we want to assign a label – either binary (yes or no, accept or reject) or one of multiple classes (e.g., a recommendation engine that suggests Product A, B, C, or D).
They are applicable to everything from simple categorization to automatic feature extraction in deep learning to new diagnostic image-classification techniques in radiology and beyond.
Common Examples of Classification Models:
- Naïve Bayes: A generative supervised learning algorithm used in spam filtering and document classification.
- Linear Discriminant Analysis: Used to resolve contradictory overlap between multiple features that affect classification.
- Logistic Regression: Predicts continuous probabilities that are then used as a proxy for class membership.
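A minimal sketch of that last point – logistic regression producing continuous probabilities that are thresholded into a class (scikit-learn and made-up study data assumed):

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical data: [hours_studied] -> pass (1) or fail (0).
X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[3.5]]))  # continuous probability per class...
print(clf.predict([[3.5]]))        # ...thresholded into a class label
```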
Training AI Models
In effect, this ‘learning’ is done by training models on sample datasets: the probabilistic trends and correlations gleaned from those samples then shape how the system performs its function.
For supervised and semi-supervised learning, this training data needs to be labeled carefully by a data scientist to get the best results. With the right feature extraction, supervised learning on a modest amount of training data will give you more accurate results than unsupervised learning, which needs far larger amounts of data overall.
Ideally, ML models are trained on real-world data because, intuitively, this best ensures that the model reflects the real-world environment it is intended to analyze or imitate. However, training on real-world data is not always possible, feasible, or optimal.
Increasing Model Size and Complexity
The more parameters a model has, the more data is needed to train it, and as deep learning models grow ever larger, that data becomes harder to obtain. We see this with LLMs: OpenAI’s GPT-3 and the open-source BLOOM each have around 175 billion parameters.
One workaround is open data but, despite its ease, it raises regulatory questions about what needs anonymizing, and practical ones, such as whether a language model trained on a social media thread might ‘learn’ bad habits or inaccuracies that aren’t acceptable for formal enterprise use.
Another way around this is synthetic data: take a smaller amount of real data and use it to generate large amounts of training data that looks similar – without the privacy concerns.
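A deliberately simple sketch of the idea: fit one distribution to a small real sample, then draw as much statistically similar synthetic data as needed (real pipelines use far richer generators; the salary figures are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# A small "real" sample (hypothetical salaries).
real = rng.normal(loc=52_000, scale=8_000, size=200)

# Fit a simple distribution to the real sample, then generate far more
# synthetic data with similar statistics - and no records traceable to
# real individuals.
synthetic = rng.normal(loc=real.mean(), scale=real.std(), size=100_000)
print(real.mean().round(), synthetic.mean().round())  # similar statistics
```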
Eliminating Bias
An ML model trained on data derived from the real world will necessarily absorb the inequities of that world. Absent intervention, this embedded bias will not only persist after training but may be amplified in the domains the model informs, such as healthcare and hiring. Recent work in data science has produced algorithms, such as FairIJ, that mitigate bias embedded in the data itself, as well as model refinement strategies, such as FairReprogram, that mitigate it in the trained model.
Overfitting and Underfitting
If an ML model learns patterns from the sample data that are irrelevant to solving the problem at hand – what statisticians call ‘noise’ – it is overfitting the training data. Underfitting is the flipside: it happens when an ML model has been trained incorrectly or on too little data, so it fails to capture even the relevant patterns.
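A small sketch makes the contrast visible: fitting polynomials of increasing degree to noisy, made-up data, where too low a degree underfits and too high a degree memorizes the noise:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=12)  # signal + noise

x_test = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    fit = np.poly1d(np.polyfit(x, y, degree))
    test_err = np.mean((fit(x_test) - y_true) ** 2)
    # degree 1 underfits (too simple), degree 9 overfits (memorizes the
    # noise), degree 3 generalizes best on held-out points.
    print(f"degree {degree}: test error {test_err:.3f}")
```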