Operation ChatGPT – a fair go too far (Part 3)


By Alicia Lucas

Company use of data and “blackbox” algorithms in day-to-day management

Uber appears to use the data it collects to micromanage drivers, to monitor and advertise to passengers and increasingly to conceal company decision-making.

Uber continuously monitors the driving style of each driver: braking, acceleration and speed. An algorithm produces results to let the driver know, for instance, that they are braking too much. Uber management used machine learning to develop the algorithm, and it is one of the management controls Uber uses to influence driver behaviour, according to Assistant Professor McDaid and co-authors. Instead of having a manager who assesses driver behaviour and addresses problems, Uber uses a computer.

Driver pickup behaviour is also monitored. The passenger-matching algorithm suggests potential pickups, and if the driver doesn’t respond positively within 30 seconds they risk being allocated fewer customers. In the past, a driver could quickly estimate what a trip was worth, but that is no longer possible now that dynamic pricing has been introduced. Trips were once calculated based on distance and duration, but the dynamic pricing algorithm uses many variables, including pickup and drop-off locations, time of day, and others undisclosed by Uber. Uber doesn’t even need algorithms to push driver pay down: Assistant Professor McDaid and co-authors also explain it can hire more drivers to increase competition with no negative consequence to the company. Only drivers will suffer lower payments.
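To make the contrast concrete, here is a purely hypothetical sketch of how a dynamic fare might be assembled. Uber does not publish its formula, so every variable, weight and number below is invented for illustration; the point is only that once multipliers for demand and time of day are layered on top of distance and duration, a driver can no longer estimate a trip’s worth at a glance.

```python
# Hypothetical dynamic fare: NOT Uber's actual formula, whose inputs
# and weights are undisclosed. All rates here are made up.
def dynamic_fare(distance_km, duration_min, demand_ratio, peak_hour):
    # Old-style fare: a flag fall plus distance and time components.
    base = 2.50 + 1.20 * distance_km + 0.35 * duration_min
    # "Surge": when riders outnumber drivers, the fare is multiplied up,
    # but never discounted below the base (floor of 1.0).
    surge = max(1.0, demand_ratio)
    # A time-of-day adjustment the driver cannot see in advance.
    peak = 1.15 if peak_hour else 1.0
    return round(base * surge * peak, 2)

# Same 8 km, 20 minute trip, priced twice under different conditions:
quiet = dynamic_fare(8.0, 20, demand_ratio=0.8, peak_hour=False)
busy = dynamic_fare(8.0, 20, demand_ratio=1.4, peak_hour=True)
print(quiet, busy)  # identical trip, different fares
```

Even in this toy version, two identical trips yield different fares, and a real system with undisclosed variables would be far less predictable.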

Could the algorithm for matching passengers with drivers eventually become something akin to a social credit system? China uses such a system to judge and adjust people’s behaviour. At the moment the algorithm considers passenger and driver ratings along with impersonal factors such as location, estimated arrival time and traffic conditions, Assistant Professor McDaid and co-authors identify. The ratings are simply a number, but behaviour may be behind that number. Could behaviour be explicitly reported on and included in the algorithm in the future, in alignment with the interest in mental health?

Passenger monitoring beyond the information collected when booking a driver appears typical of other big tech companies. Trackers, most commonly “cookies” though other types exist, are placed on people’s mobile phones to collect information such as the technology being used (for instance, phone type) and to support marketing (for example, targeted advertisements).
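For readers unfamiliar with the mechanics, the sketch below shows in generic terms how cookie tracking works: a site assigns an identifier once, and the browser sends it back automatically on every later visit, letting the site recognise the returning device. The names and values are invented for illustration and are not Uber’s or OpenAI’s actual cookies.

```python
# Generic illustration of cookie tracking (hypothetical names and values).
# Step 1: on the first visit, the server assigns an ID via a response header.
set_cookie = "Set-Cookie: visitor_id=a1b2c3d4; Max-Age=31536000; SameSite=Lax"

# Step 2: on every later visit, the browser echoes the ID back automatically.
next_request = "Cookie: visitor_id=a1b2c3d4"

# Step 3: the site links this request to the earlier one by the same ID.
cookie_id = next_request.split("=")[1]
print(cookie_id)  # a1b2c3d4
```

Third-party marketing trackers work the same way, except the identifier is readable across many unrelated websites, which is what allows browsing to be profiled for advertising.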

The models run using algorithms are known as “blackbox” models: they give an answer, but the inner workings are obscure. Uber may not even identify all the inputs. Additionally, the answer can appear arbitrary. For instance, under dynamic pricing the amount paid by a passenger and the fee taken by Uber vary from trip to trip even when trips are similar in distance, and the fee the driver receives varies independently of the passenger fare. Passengers may like Uber because the trip fare is provided up front, but does the invisibility of how a trip is priced concern them?

While ChatGPT isn’t a fully functioning mental-health-related business, it already collects data and uses “blackbox” models that could be built on to carry out day-to-day operations if the company chooses to.

Like Uber, OpenAI monitors companion chatbot app users, or potential users, by tracking with cookies and similar technologies (such as pixels and local storage). Tracking occurs both when using ChatGPT and when visiting OpenAI web pages. Trackers are identified as strictly necessary or as used for such things as supporting marketing. Tracking can reveal information such as the computer or mobile phone in use, the date, and the social media website a user came from. The European Union requires website owners to ask before placing non-necessary cookies on a user’s device; in Australia there is no requirement to do so.

To give some idea of the number of trackers used in the mental health area, Greengard referred to a 2024 Mozilla report in which researchers counted at least 24,000 data trackers within a minute of using an AI companion app, while the UK’s Privacy International found in 2019 that more than 75% of 139 online mental health websites allowed third-party marketing trackers.

ChatGPT is a “blackbox” generative machine learning model. As with Uber’s models, an answer is calculated, but no one is able to understand the complex detail of how it was arrived at. The model may calculate the right answer, but no one knows on what basis. The model may calculate the wrong answer, and hopefully someone realises.
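A toy example may help show why even the model’s makers cannot explain an answer. The sketch below is deliberately simplified and is not how ChatGPT actually works: it is a two-layer network with made-up weights. The output is just arithmetic over numbers that carry no human-readable meaning, so there is nothing to point to as the “reason” for the answer, and real models have billions of such numbers rather than nine.

```python
# Toy "blackbox": a tiny two-layer network with invented weights.
# The answer is perfectly computable, yet no individual weight
# explains WHY the model produced it.
def tiny_network(inputs):
    w1 = [[0.7, -1.2, 0.4],
          [0.1, 0.9, -0.3]]   # opaque "learned" weights, layer 1
    w2 = [1.5, -0.8, 0.6]     # opaque "learned" weights, layer 2
    # Hidden layer: weighted sums passed through a ReLU (negatives -> 0).
    hidden = [max(0.0, sum(x * w for x, w in zip(inputs, col)))
              for col in zip(*w1)]
    # Output: another weighted sum. Which weight "caused" this score?
    return sum(h * w for h, w in zip(hidden, w2))

score = tiny_network([1.0, 2.0])
print(score)
```

Scaled up to billions of weights and trained on scraped text rather than hand-set numbers, this is the sense in which a model can be right, or wrong, on a basis no one can inspect.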

Scientists have always used “blackbox” models when needed but have also emphasised gathering the right data to better understand the nature of a problem. Once sufficient, appropriate data is available, researchers can build models or undertake statistical analyses targeted to inform the question under consideration resulting in improved interrogation and understanding.

“Blackbox” models used in diagnosing mental health, such as in the early detection of problems, should immediately raise alarm bells, as the models cannot be checked or understood. Neither the user nor a mental health professional may be able to challenge the calculation made. It could become the model’s calculation against the word of the user. What happens if a user ignores the recommendation of a model to see a therapist available through ChatGPT? Could there be legal consequences for either the person or a therapist? For example, if the model detects the user is about to seriously self-harm and a therapist does not follow up, will they be responsible if the person hurts themselves? If the model’s calculations are shown to be wrong, will they be deleted or will they follow a person around for the rest of their life?

ChatGPT – future

OpenAI and ChatGPT are still in their early days, but with talk of markets and recent restructuring to promote investment, the company does seem to be looking to monetise ChatGPT. ChatGPT might become a collection of Uber-like businesses spanning mental health, medical health, marketing, and shops like Big Tech’s Amazon, including pharmacies. Would these businesses also interact? Could any health advice given be influenced by what the shops are selling?

There are many reports of OpenAI turning to adult erotica as a way of keeping competitive. Elon Musk’s Grok chatbot already has an “18 plus” mode and a “sexy” avatar. ChatGPT’s “adult” mode will become available once age verification is in place, according to the CEO. Making erotica available raises the question of why companies with chatbots of supposedly immense intellectual power need to resort to base human emotions to promote business. Will gambling also be added to ChatGPT? Leaving aside the ethics of digitally identifying young people who may not really understand consent and its consequences, do people and businesses really want to deal with a company when they are not quite sure what it will or won’t do to remain competitive?

Australia – a fair go too far

Australians are known for giving everyone a fair go. Uber has managed to reach a market-dominant position in Australia. Is that because Uber had a seemingly bottomless pit of money to draw from? Would OpenAI also be able to out-compete Australian businesses in its chosen areas because of a similar bottomless pit? When does a fair go really mean all Australians being tied into giving more money to overseas companies that are already very wealthy? A fair go shouldn’t be easily given away.

Momenta is trying to produce “driver-less” cars, and those planned for Germany are identified as autonomous Level 4, cars only “targeting full autonomy”. That is, human intervention may still be needed to avoid accidents. No worldwide classification of these types of cars exists. Waymo, used in the USA, is also classed as Level 4 and doesn’t need a driver in the car. However, this is only in selected areas where the company’s remote human operators can maintain a link and intervene when needed, and where the roads are so well known that surprises are unlikely to interrupt the journey.

There are no accepted worldwide definitions of what have been considered misleading descriptions such as “assisted driving”, “autonomous”, “autopilot” or “driver-less” cars. Fatal accidents in the USA and China have heightened concerns about descriptors for these car types. Individuals in the cars may not have understood that a car called, for example, autonomous may actually require human intervention.

Two recent crashes have been mentioned in this context. In April 2025, a Xiaomi SU7 was using “autopilot” at a speed of 116 km/hour just before it crashed in China, killing three former classmates. The “autopilot” disengaged after detecting a barrier and alerted the person in the driver’s seat to take control, but even with the driver’s immediate reaction, the two seconds available weren’t enough to avoid the crash. In August 2025, a court in Florida, USA, found Tesla partly liable for a Tesla car crash that killed a pedestrian and seriously injured her partner. The car was using “Autopilot”. Tesla argued that the person in the car was fully responsible for the accident. The jury disagreed, and Tesla was ordered to pay US$243m in damages. Tesla said it will appeal the decision.

China’s State Administration for Market Regulation is drafting rules to prevent car manufacturers from implying a car can drive by itself when a driver is really needed in case a problem occurs.

Data from the USA’s National Highway Traffic Safety Administration indicates an increasing number of accidents since 2021 involving either cars aiming for no human involvement or those with advanced driver assistance. The accidents may not all be the fault of these types of cars. Mark MacCarthy, of the USA think tank the Brookings Institution, advised caution in 2024, as self-driving vehicles may not be safer than cars driven by humans. MacCarthy notes the view held by some that self-driving vehicles may still have accidents, just different ones from human-driven cars. The New York Times in June 2025 reported on the difficulties of obtaining statistics on driver-assisted car accidents in China. Continued examination of accident data over time is needed to assure the improved safety of these car types compared with human-driven cars. Accident data should be examined wherever these cars are available or about to be released, and limitations in the data should be explained alongside the results of analysis.

* * * * *

Note: OpenAI no longer discloses details of the data used. This article is partly based on what the company acknowledged it used in GPT-3, an earlier version of its model. It refers to the use of data scraped from the internet, such as that found in Common Crawl and Wikipedia. Wikipedia itself describes business plans, although still requiring further work, and gives a broad template of a plan’s content. Other scraped sites may also have had similar content. A newspaper reported in 2023 on an analysis of a similar model’s dataset (Google’s C4) and found data from business and industry websites and social media websites such as food52.com and a World of Warcraft player forum. C4 also uses Common Crawl. The social media site Reddit was used in GPT-2 (Bender, Gebru et al. 2021), and OpenAI has now signed an agreement to use Reddit data. The article is also based on the production of business plans being a common suggested use of ChatGPT; a search of the internet lists many sites that can help.

 

Previous instalments:

Operation ChatGPT – a fair go too far (Part 1)

Operation ChatGPT – a fair go too far (Part 2)


Keep Independent Journalism Alive – Support The AIMN

Dear Reader,

Since 2013, The Australian Independent Media Network has been a fearless voice for truth, giving public interest journalists a platform to hold power to account. From expert analysis on national and global events to uncovering issues that matter to you, we’re here because of your support.

Running an independent site isn’t cheap, and rising costs mean we need you now more than ever. Your donation – big or small – keeps our servers humming, our writers digging, and our stories free for all.

Join our community of truth-seekers. Donate via PayPal or credit card via the button below, or bank transfer [BSB: 062500; A/c no: 10495969] and help us keep shining a light.

With gratitude, The AIMN Team


