Everybody wants Artificial Intelligence (AI), and everybody wants it now.
As with most hype, it is worth taking a step back to understand what AI is, and perhaps just as importantly, what it is (or is not) suitable for.
The term “artificial intelligence” was coined in 1955 by John McCarthy, then a professor at Dartmouth College in the United States and later the founder of Stanford University’s AI laboratory, who defined it as “the science and engineering of making intelligent machines”.
The concept has evolved to one in which machines simulate human intelligence in areas such as learning, problem-solving and decision-making.
This is achieved through algorithms and data analysis – the ability to recognise patterns, and to provide analysis and insight, in ways that mimic the functions associated with human intelligence.
What is a good AI?
Just as a human brain has certain prerequisites in order to function well, there are various factors that determine success in an AI.
AI algorithms require high-quality and diverse datasets for proper training.
Narrow AI – AI built for one particular task, such as a website chatbot – may not require as large a dataset, but it still requires quality input.
The basics are important – the data used to train AI must be of sufficient quality and with minimal bias.
Poor input will lead to poor output.
The data itself should be collected in a way that builds on trust, and should adhere to privacy and ethical guidelines.
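For the technically curious, the idea that poor input leads to poor output can be made concrete. Here is a minimal sketch in Python, using the pandas library; the dataset and column names are invented purely for illustration, and show the kind of basic quality checks a data scientist might run before any training begins:

```python
import pandas as pd

# Hypothetical patient dataset -- the column names and values are
# invented for illustration only.
data = pd.DataFrame({
    "age": [34, 51, None, 62, 45, 29],
    "smoker": ["yes", "no", "no", "yes", None, "no"],
    "diagnosis": ["healthy", "ill", "healthy", "ill", "healthy", "healthy"],
})

# Basic quality checks before any training happens.
print("Missing values per column:")
print(data.isna().sum())

# A heavily skewed label distribution is one simple warning sign of bias.
print("\nLabel balance:")
print(data["diagnosis"].value_counts(normalize=True))
```

Checks like these are crude, but they catch the most obvious problems – gaps in the data and lopsided labels – before they can quietly shape a model’s behaviour.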
After data collection comes computation – the processing of large datasets and the training of complex models, both of which demand very high computational resources.
High-performance processors accelerate the training and inference processes, enabling faster and more efficient computations.
The need for such processing power is the driving force behind the high valuation of Nvidia, one of the top tech companies, with a market capitalisation of more than US$1.8 trillion (RM8.6 trillion) at the time of writing.
Dealing with datasets and AI requires the appropriate skillsets.
These include expertise in machine learning, deep learning and natural language processing.
Researchers and data scientists have to design and optimise algorithms to achieve specific goals, such as classification, regression, clustering or reinforcement learning, depending on the application domain and task requirements.
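To give a flavour of what one of these tasks involves, here is a minimal sketch of classification in Python, using the widely used scikit-learn library and its built-in iris flower dataset; the choice of model and settings is illustrative, not a recommendation:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small, well-known dataset of flower measurements.
X, y = load_iris(return_X_y=True)

# Hold out a test set so the model is judged on data it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a simple classifier and measure its accuracy.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Real-world systems differ in scale rather than in kind: the same train-then-evaluate loop underlies far larger models.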
Specific domain expertise is crucial in understanding the context and utilisation of AI in specific sectors, e.g. healthcare professionals for AI in medicine, financial analysts for AI in finance, etc.
This is important to ensure that mistakes are minimised and results can be utilised in real-world contexts.
All of the above can – and should – be allowed only where there is adherence to ethical principles and regulatory standards.
This is to ensure fairness, transparency, accountability and privacy protection.
Ethical AI frameworks promote responsible AI practices, mitigate biases and address societal implications.
This is the rationale for the European Union introducing its AI Act, the first major legislation that codifies AI protections into law.
The advantages of AI
The primary value of AI is its ability to process and analyse vast amounts of data at speeds far beyond those of the human mind.
This is particularly useful in repetitive or mundane tasks, freeing up human resources to focus on more complex and creative endeavours.
AI can also analyse complex datasets to identify patterns, trends and correlations that may not be apparent to humans.
Future outcomes and trends can be identified by analysing historical data and identifying predictive patterns.
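A toy example makes the idea concrete. The sketch below (in Python, using the numpy library; the “patient visits” numbers are invented) fits a simple trend to historical data and extrapolates it forward – real forecasting models are far more sophisticated, but the principle is the same:

```python
import numpy as np

# Hypothetical historical data: monthly patient visits over two years.
months = np.arange(24)
visits = 100 + 3 * months + np.random.default_rng(0).normal(0, 5, 24)

# Fit a straight-line trend to the history...
slope, intercept = np.polyfit(months, visits, deg=1)

# ...and extrapolate it to forecast the next month.
forecast = slope * 24 + intercept
print(f"Trend: +{slope:.1f} visits/month; forecast for month 25: {forecast:.0f}")
```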
At a more granular level, AI systems can perform tasks with a high degree of accuracy and precision, often surpassing human capabilities.
In fields such as medical diagnosis, radiological image recognition and quality control, AI algorithms can achieve levels of accuracy that are difficult for humans to replicate consistently.
At a more personal level, AI algorithms can tailor experiences and recommendations to individual preferences and behaviours.
In fields such as ecommerce, entertainment and healthcare, personalised recommendations based on AI analysis can enhance customer satisfaction and engagement.
The downside of AI
One of the primary criticisms of AI is the potential for bias: AI learns from historical data, which may carry biases inherent in how it was collected.
If that data reflects societal biases or systemic inequalities, AI algorithms can perpetuate, and even exacerbate, them.
There are various examples of biased training data leading to discriminatory outcomes in loan approvals, and even court decisions.
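Detecting such bias can start with very simple arithmetic. The sketch below (in Python with pandas; the loan data and group labels are entirely invented) compares approval rates across two groups – a crude version of what fairness researchers call a demographic parity check:

```python
import pandas as pd

# Hypothetical loan decisions -- group labels and outcomes are invented
# purely to illustrate a fairness check.
loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Demographic parity: compare approval rates across groups.
rates = loans.groupby("group")["approved"].mean()
print(rates)
print(f"Approval-rate gap: {rates.max() - rates.min():.2f}")
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that should prompt closer scrutiny of a model and its training data.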
The algorithms are also complex, with minimal transparency and interpretability.
It is not wrong to say that very few people actually understand what happens inside an AI. For the majority of us, it is very important to be cognisant of what we do not know – there is no room to blindly trust an AI, or to treat it as infallible, without knowing how it arrives at a decision, especially in the absence of accountability, fairness and trust.
The increasing ubiquity of AI, and our growing reliance on it, can lead to complacency and a lack of critical thinking.
One must not overlook the errors, biases and limitations of AI, such as its inability to make decisions based on moral or value judgements.
AI in healthcare
Given the context above, one can start to imagine the opportunities and limitations of AI in enhancing healthcare delivery.
Healthcare is rich in data: symptoms can be analysed for diagnosis, treatments can be tailored at the genetic level, AI-powered remote devices can monitor vital signs, algorithms can analyse large datasets to identify potential drug candidates – the list goes on.
However, AI is not a panacea, and should not be treated as one.
There is bias within algorithms; there are data privacy and security concerns; data can be misinterpreted by models that have not been properly validated; practitioners can become overreliant on the technology; and cost considerations can widen inequality – once again, the list goes on.
AI is changing the world, even as its own nature changes over time.
There is tremendous potential, but this has to be viewed alongside the challenges of data privacy, bias and regulatory issues to ensure that AI is implemented ethically and responsibly in all its settings, including healthcare.
As I wrote at the start of this column, it is worth taking a step back to understand what AI is, and just as importantly, what it is not.
Dr Helmy Haja Mydin is a consultant lung specialist and CEO of the Social & Economic Research Initiative. For further information, email [email protected]. The information provided is for educational and communication purposes only. The Star does not give any warranty on accuracy, completeness, functionality, usefulness or other assurances as to the content appearing in this column. The Star disclaims all responsibility for any losses, damage to property or personal injury suffered directly or indirectly from reliance on such information.