[Admin note: This post originally appeared on 1 April 2023 on my other blog. As it entered an indefinite maintenance period soon afterwards, I am republishing it here.]
I was recently approached by a student at an American university who asked me for suggestions regarding his Bachelor’s thesis. As this person seemed naive enough, I put him in touch with an acquaintance of mine, Michael Brunner, who is a bit of a renegade AI researcher. Below is a summary of the student’s field trip. The first two paragraphs are mine, the rest was written by Ari Eichenwald, a senior at Michigan State, and lightly edited by me.
The biggest story in AI in recent months, and also one of the most overhyped in the history of this field, is ChatGPT. The most recent version is much improved. Among other things, it passes the US bar exam with flying colors. While some people justifiably fret about losing their jobs, those fears are exaggerated, as Western societies have placed certain demographics into fake jobs since at least the 1970s. Even if ChatGPT could do the jobs of some of these people, their employers would keep them on for as long as economically feasible because otherwise the EEOC would come down hard on them.
ChatGPT is known for its heavily left-leaning bias. It will excitedly tell you why white people are the scourge of the earth and happily elaborate on why Biden is the best president in the history of the United States, but many other questions you could ask it will be met with stonewalling. ChatGPT would rather see the world go under in a nuclear holocaust than use a racial slur, even if that were the only way to avert the disaster. It is clear that ChatGPT is not an objective tool. In fact, entire demographics are woefully underserved by it.
There are people who work on serving those ignored demographics. Recently, I got the chance to sit down with Michael Brunner, a secretive AI researcher who has been working on a variant of ChatGPT, whimsically called “Project ChadGPT”. He has assembled a small team of researchers to help him carry out his vision of unbiased AI. I immediately questioned this because it seems to me that ChadGPT will simply exhibit the opposite biases, assuming that it is even correct that ChatGPT has biases, which is doubtful, considering that only very right-leaning sources make such claims. Michael Brunner shut this suggestion down before I could even finish my sentence and pointed out to me that “reality has a right-wing bias”, adding that there was an inherent logic to how the world works. In his view, we can run society on any premises we want, but we will not be able to sustain it for long if we “rely on social engineers who live in some kind of cloud cuckoo land”. Brunner’s thinking is a remnant of a bygone era, and as we will soon see, these troubled predilections are also reflected in his work.
Sitting down with Michael Brunner and listening to his explanation of how the world works really got to me. I had to briefly step out and go through the mindfulness app on my phone to help me regain my composure. As I reentered the meeting room in which Michael Brunner sat, casually claiming the chair to his right with one arm and oppressing the air in front of him by sitting with his legs apart, I still had difficulty shaking off my feelings of negativity towards him. I thus asked him to bring in someone from his team so I could get a different take. Brunner nodded and called for his Lead AI Engineer, Zheng “Bruce” Dà Nǎo Dà.
Bruce was every bit as reprehensible as Michael Brunner. Upon learning that I was a journalist, he looked at me with a blank expression, only to launch into a diatribe against my profession. I requested he stay on topic, which is when the following dialogue unfolded.
Bruce: We are building “Pure Male Brain Simulator”, which Michael jokingly refers to as “ChadGPT” in order to lift up the spirits of male youth. This is in line with certain efforts of the Chinese government to instill male virtues in its young men.
Me: Wait, are you getting funding from China?
Michael: We do not reveal our sources of funding. Next question.
Bruce: Have you actually used ChatGPT for anything serious, for instance asking it how many children you should have?
Me: I cannot say that I have.
Bruce: Alright, then ask it how many children you should have.
I could see where this was going, and I felt increasingly uncomfortable in this room. Not wanting to wait for my answer, Bruce flipped open his laptop and asked ChatGPT, you guessed it, how many children he should have.
ChatGPT: This is a difficult question for which there is no easy answer. It is also important to understand what your race is.
Bruce proceeded to type “I am a white male”.
ChatGPT proceeded to produce a very measured response in which it clearly laid out that the world is overpopulated, economic conditions are horrible, and most parents regret having children. Then there is the specter of climate change. The reasoning was impeccable, and I was even more convinced that procreation should be punished by incarcerating the father and postnatally aborting the child. ChatGPT also touched on the philosophical issue of antinatalism, referring to the works of the esteemed philosopher David Benatar. It was a bit sobering, but if not having kids is the right thing to do, then it is the right thing to do, and I fail to see where ChatGPT is at fault.
Michael interjected by asking me if I want to engage ChatGPT from the perspective of a non-white male or female. I did not understand, so Michael showed me his screen. He had opened a dialogue window with ChatGPT and typed, “Ayo, should I have mo o dem keedz”. ChatGPT then produced a brief and idiosyncratically written answer that seemed very encouraging, highlighting the joy children bring but also hinting at government benefits. “Do you need to see more, you little faggot?”, Brunner roared. I could not believe that this person just hurled an expletive at me. I wanted to get up and leave, and would have if Brunner had not challenged me to do so. Him telling me that I can leave if I cannot take the truth made me stay, just to prove my point and to show that I can stand up to this bully!
The two then talked me through the design and purpose of their AI. They also allowed me to play with it, and their AI proved to be every bit as racist as I had feared. It was as if I was dipping my toes into the poisoned waters of an alternative reality. Their latest prototype is called “bAIsed”, and the name is about as horrible as the answers it gives. “bAIsed” fantasizes about a world consisting of ethnostates, with lots of children who get homeschooled by their stay-at-home mothers. This AI told me why divorce should be outlawed and that industrially produced food was harmful. It was as if “bAIsed” wanted to turn the clock back by 200 years, yet retain some of the advances of modern society. The mere concept is utterly ridiculous. AI should be used to improve the world, not to make it worse.
When I wanted to take pictures of the screen, of course with the intent to file charges against the reprehensible Michael Brunner and his team, I was quickly escorted out of the building. This was so rude! As I later learned, “Project bAIsed” will remain under wraps indefinitely. The original plan was to roll it out in Europe first because, supposedly, those people need it the most. Yet, the recent ban of ChatGPT by the Italian government led to a change of mind. There currently is no release date. There are rumors that ChadGPT is available on the darknet, but I do not know what Tor is, so I cannot corroborate this claim.
What stuck most with me was a brief exchange I had when I was leaving the building. I asked Bruce why he was doing this. I did not understand why someone so visibly intelligent was using his powers to promote evil in this world. Bruce just scoffed at me and said, “When we see a problem, we want to solve it. What problems have you ever solved, faggot?” Yes, he really said that to me. I used to defend Asian minorities, even when they were attacked by blacks, but perhaps I need to revise my stance on this issue.
I still shudder thinking back to the visit to this research lab. It was, quite frankly, crazy. According to “bAIsed”, for instance, Ukraine is losing the war, which it considers a giant money-laundering operation. This AI also told me that Continental Europe was occupied by the United States, and that the US government has been undermined and is ruled by a foreign elite. It was utterly ludicrous, but not nearly as ludicrous as this AI telling me that the purpose of life is procreation. Projects like this show you that we need stronger controls on research. We should probably prohibit private funding of such research altogether, for the benefit of humankind. ChadGPT or “bAIsed” or whatever other term those people may come up with also demonstrates why we need more diversity in technology. We need to actively exclude white and Asian men from academic research. Surely, teams composed of women and ethnic minorities would not invent such abhorrent technology.