
Demystifying Artificial Intelligence for Social Services

ChatGPT? DeepSeek? Generative AI? What do any of these mean for people in the public or social sector? In this conversation with Emil Ng, CEO of DigiHwy Certainty (DHC), we discuss current trends in Artificial Intelligence and tackle frequently asked questions.


J: First things first, could you define some of these tech buzzwords flying around these days in layman's terms?

E: AI in the strictest sense refers to computers capable of autonomous decision-making. Digital tools or software that claim to be AI-powered are usually referring to automation, where the tool can replicate tasks based on pre-programmed logic or rules.

AI just happens to be a sexier term for selling products. Before this, the same marketing gimmick was frequently applied to "machine learning", another form of automation that relies heavily on probability theory and statistics.

Sometimes you might come across the term generative AI which refers to the ability to create content, usually in text or image format, based on a user's input.

J: What are some of the top concerns or questions social work practitioners tend to have when it comes to AI technologies?

E: Because of how AI has suddenly become mainstream, there is a lot of pressure now for organisations to digitalise and explore how these technologies can be applied in social work settings.

I've noticed that enquiries from top management tend to revolve around how AI can be used to tackle manpower issues. This is a commendable goal: the sector is already overwhelmed, and we do believe technology can take over much of the tedious, repetitive work.

However, if we’re talking about using AI to replicate the expertise of a social worker for diagnostics or even predictive exercises, then I would offer a word of caution. What most people think of as artificial intelligence, namely AI chatbots, are in fact LLMs (large language models).

LLMs work by ingesting large volumes of data and generating answers based on which response is statistically the likeliest. That is not necessarily the most accurate or truthful response.

Most AI tools are in fact closer to statistical models, which are only as reliable as the data they are fed. The current generation of LLMs is mostly trained on data from social media and public forums, which makes them vulnerable to all sorts of biases.
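To make the "likeliest, not truest" point concrete, here is a toy sketch in Python. The mini-corpus and function names are invented for illustration; real LLMs are vastly more sophisticated, but the principle that outputs mirror the frequencies in the training data is the same.

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus". Note that "the service is free"
# appears more often than other endings -- the model will reflect that,
# regardless of whether it is actually true.
corpus = (
    "the service is free the service is fast "
    "the service is free the waitlist is long"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def likeliest_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(likeliest_next("is"))  # "free" -- the most common answer in the data, not a verified fact
```

The model answers "free" simply because that phrasing dominated its training data; a model trained on biased forum posts inherits those biases in exactly the same way.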

Having said that, I would encourage people in the space to start laying the groundwork to collect good data that is unique to your organisation, and to begin exploring ways to use it to build more accurate and ethical models yourselves.

Social workers are well positioned for this sort of work because of their background in the social sciences, which trains them to build statistical models while accounting for biases.


Another area of interest among our clients is speech-to-text transcription tools. This is a low-hanging fruit that any organisation can explore, whether for work meetings or for transcribing client case notes. The technology is getting more advanced: many tools are now multilingual and can even handle accents.

A caveat, though, lies in the risks of using third-party tools to process client data. Most commercial tools commit to the bare minimum needed to pass regulatory requirements, so there may be a need to draft a separate contract with your service provider to secure your data.

Pro tip: Check out scribe.gov.sg to learn more about OGP Singapore’s speech-to-text developments.


J: That's great food for thought from an organisational risk standpoint, but what about people on the ground? Could you offer some perspective for social workers or other frontline staff who may be affected by these shifts in technology?

E: Frontliners play a big part in shaping how technology fits actual day-to-day realities. I would say, don't get caught up in the tech; start with the basics. For example, one of our clients roped us in to help automate their data entry and submissions to government websites. We used a hassle-free, low-cost technology called RPA (robotic process automation) to automate these tasks, and it has been working well since 2022.

But to achieve this, the client's teams had to sit down and document every step the RPA must take to fill in the data correctly and make submissions. The groundwork of fixing processes, testing new ones, and documenting them will always be needed before you attempt any fancy technology.
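As a purely hypothetical sketch of that documentation step (the field names, columns, and actions below are invented, and real RPA tools drive a browser rather than a Python list), documented steps can be captured as plain data that an automation then replays in order:

```python
# Hypothetical documented workflow: each entry is one step the team wrote
# down, which an RPA tool (or this toy replayer) follows mechanically.
FORM_STEPS = [
    {"action": "open",  "target": "submission portal"},
    {"action": "fill",  "target": "client_name", "source_column": "Name"},
    {"action": "fill",  "target": "case_date",   "source_column": "Date"},
    {"action": "click", "target": "submit"},
]

def run_steps(steps, record):
    """Replay documented steps against one data record, returning an action log."""
    log = []
    for step in steps:
        if step["action"] == "fill":
            log.append(f"fill {step['target']} = {record[step['source_column']]}")
        else:
            log.append(f"{step['action']} {step['target']}")
    return log

for line in run_steps(FORM_STEPS, {"Name": "A. Tan", "Date": "2022-03-01"}):
    print(line)
```

The point of the sketch is that the automation is only as good as the documented steps: if the team's write-up misses a step or maps the wrong column, the robot faithfully repeats that mistake on every record.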

J: So to wrap up, what’s the most important message you’d like to leave?

E: Even though there may be immense pressure to ride on the tech bandwagon, the field of community care will always be human-centric.

The sweet spot is to have real humans delivering front-facing services while letting computers support backend administration. This still requires the active supervision of skilled practitioners, researchers, and others.

Last but not least, be very suspicious of any product that tries to inflate its value with AI!


DigiHwy Certainty is a multi-entity partnership spanning Asia Pacific. Bound by a common purpose to uplift societies through honest technology, we provide affordable, accessible, and scalable technology for public and non-profit organisations.

For more information about DHC and its projects, visit www.dhcertainty.org.