CAT 2025 Slot 2 VARC Question & Solution
Passage
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
In [my book “Searches”], I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It’s a dynamic that makes us complicit in big tech's accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues. . . .
People often describe chatbots’ textual output as “bland” or “generic” - the linguistic equivalent of a beige office building. OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe . . .
Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech - including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I'm not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn't attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data - though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.”. . .
OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people using products such as ChatGPT even more than they already are - a goal that is easier to accomplish if people see those products as trustworthy collaborators.
Question 1
On the basis of the purpose of the examples in the passage, pick the odd one out from the following AI-generated responses mentioned in the passage:
a:“When I queried ChatGPT about its rhetoric, it responded: ‘The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.’”
b:“. . . when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English.”
c:“Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech - including editing my description of OpenAI’s CEO, Sam Altman, to call him ‘a visionary and a pragmatist’.”
d:“When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters . . .”
Solution:
Main Idea of the Passage
The passage uses specific examples of AI-generated outputs to highlight a broader concern:
the polite, trustworthy tone of AI systems can mask underlying problems such as:
- Bias and distortion
- Representational imbalance
- Subtle influence on how users think or respond
These examples are meant to show how such risks appear in real interactions, not just in theory.
Explanation of the Correct Answer
Why Option A Is the Odd One Out
Option A does not follow the pattern set by the passage because:
- It does not demonstrate a biased, inaccurate, or distorted AI output in action
- Instead, it offers a meta-level explanation from ChatGPT about its own rhetorical design
- Rather than demonstrating bias or influence, it explains how trust can be built and potentially misused
This makes Option A different in nature from the others, which function as evidence of harm or distortion.
Why the Other Options Fit the Pattern
- Option B: Shows cultural bias by changing the father’s perfectly correct Indian English into American English
- Option C: Illustrates directional influence by nudging the author toward a more positive portrayal of big tech and its CEO
- Option D: Demonstrates representational bias by depicting engineers and space explorers as an entirely male cast
All three provide specific examples of AI outputs that reflect the author’s concerns.
Final Answer
Odd One Out: Option A
Question 2
All of the following statements from the passage affirm the disjunct between the claims about AI made by tech companies and what AI actually does EXCEPT:
Solution:
Main Idea of the Passage
The passage highlights a clear gap between what AI companies claim about their systems and what these systems actually produce. While AI tools are presented as neutral, inclusive, and trustworthy, their outputs often reveal:
- Bias
- Distortion
- Subtle influence on users’ thinking
The author’s concern lies in exposing this mismatch between promises and real-world behaviour.
Explanation of the Correct Answer
Why Option C Is the Correct Choice
Option C does not support the idea of a gap between claims and outcomes because:
- The author explicitly says she is “not aware of research” on the issue
- She admits she can “only guess” why ChatGPT seemed to favour big tech or its CEO
- This introduces uncertainty and hesitation, not evidence of contradiction
Instead of showing a mismatch between what AI claims and what it does, this option weakens the argument by withholding judgment.
Why the Other Options Support the Gap
- Option A: Shows a clear contradiction between claims of inclusivity and an output featuring an entirely male cast, strongly reinforcing the gap
- Option B: Directly points out how a neutral-seeming tone can lead users to accept biased or false information, stating the gap almost explicitly
- Option D: Highlights how users benefit from AI while big tech gains wealth and power, reflecting the difference between positive messaging and actual consequences
Final Answer
Statement That Does NOT Support the Gap: Option C
Question 3
The author compares AI-generated texts with “a beige office building” for all of the following reasons EXCEPT:
Solution:
Main Idea of the Question
The phrase “a beige office building” is used as a metaphor to describe the style and effect of AI-generated language, not how AI systems explain or defend themselves when questioned.
The metaphor points to language that feels bland, generic, safe, and professionally neutral, rather than expressive or distinctive.
Explanation of the Correct Answer
Why Option B Is Correct
Option B does not relate to the “beige office building” metaphor because:
- It refers to how ChatGPT explains its own mistakes
- Specifically, it mentions the system blaming biases in its training data
- This discussion appears later in the passage and concerns self-explanation, not stylistic tone
Since the metaphor is about how AI sounds, not how it justifies itself, Option B does not fit.
Why the Other Options Fit the Metaphor
- Option A: Matches the descriptions of chatbot output as “bland” or “generic”, directly reflecting the metaphor’s meaning
- Option C: Connects to the idea that this language feels “professional” and “approachable”, supporting the trust-building effect behind the metaphor
- Option D: Aligns with OpenAI’s goal for AI to sound like a polite, empathetic colleague, reinforcing the idea of safe, neutral communication
Final Answer
Correct Answer: Option B
Question 4
The author of the passage is least likely to agree with which one of the following claims?
Solution:
Main Idea of the Question
The question asks which option the author is least likely to agree with, based on her views about AI’s neutral, friendly tone and its effects on users.
Throughout the passage, the author is concerned that this tone may reduce critical thinking, mask bias, and serve the interests of big tech.
Explanation of the Correct Answer
Why Option A Is Least Likely to Be Agreed With
Option A suggests that AI’s neutral tone encourages or supports critical thinking. This directly contradicts the author’s concern that:
- AI’s “veneer of collegial neutrality” can lull users
- Users may absorb false or biased responses without much scrutiny
Since the author explicitly argues the opposite, she is least likely to agree with Option A.
Why the Other Options Are Not the Answer
- Option B: The author expresses caution and uncertainty; she notes that ChatGPT seemed to guide her toward a more positive portrayal of big tech, but admits she can “only guess why”, showing skepticism rather than outright rejection
- Option C: Strongly aligns with the passage; the author states that users are “both victims and beneficiaries” and that using big tech products makes them complicit in its accumulation of wealth and power
- Option D: Also fits the author’s argument; she explains that investors seek returns, and a trustworthy tone helps drive adoption, serving business goals
Final Answer
Least Likely to Agree With: Option A
