Meta Lets Candidates Bring AI to the Coding Interview

& Why Thousands of ChatGPT Chats Just Went Public

Meta Breaks the Mold: Letting Job Candidates Use AI During Coding Interviews

Meta is piloting a new hiring experiment called “AI‑Enabled Interviews”, giving candidates access to an AI assistant during coding tests. The idea is to better reflect real developer workflows while making LLM‑based cheating less effective.

An internal post confirmed Meta is recruiting employees to act as “mock candidates” to help refine the process. Unlike Amazon, which disqualifies candidates caught using AI, Meta sees this as a natural extension of its workplace tools—especially as Mark Zuckerberg predicts AI will reach mid‑level engineer capabilities by 2025.

Meta also plans to use AI internally to speed up recruiting, from generating coding prompts to automating skills assessments. While AI will play a bigger role in hiring, the company insists human‑to‑human interviews will remain central to the process.

Trillion‑Dollar Titans: Then vs. Now

Credits: Visual Cap

Centuries ago, corporate giants made today’s tech powerhouses look small. The Dutch East India Company (VOC) alone peaked at $10.15 trillion (inflation‑adjusted) in 1637—thanks to a government‑backed monopoly on the global spice trade and the speculative frenzy of Tulip Mania.

Two other 18th‑century trading behemoths, France’s Mississippi Company and Britain’s South Sea Company, hit $8.35 trillion and $5.52 trillion respectively before collapsing in spectacular bubbles. Their rise and fall became textbook cases of market euphoria gone wrong.

Today’s “Magnificent Seven” tech leaders—Nvidia ($4.2T), Microsoft ($3.79T), and Apple ($3.15T) among them—dominate modern markets, but none yet match history’s biggest valuations. Different century, same story: monopoly power, speculation, and investor frenzy can mint—and unmake—the world’s richest companies.

Why Thousands of ChatGPT Chats Just Went Public

Thousands of shared ChatGPT conversations—some containing deeply personal details like mental health struggles, addiction histories, and locations—are showing up in Google search results. The exposure stems from ChatGPT’s “Share” feature, which generates public URLs that Google’s crawlers can index.

While OpenAI says chats remain private unless explicitly shared, the platform’s warning doesn’t clearly mention Google indexing, and many users appear unaware that sharing a conversation is essentially publishing it to the open web. The behavior itself isn’t new; Google helped establish this norm. When people share public links to files from Google Drive, such as documents set to “Anyone with the link can view,” Google may index them in Search.

The privacy risks are significant: nearly half of Americans used chatbots as therapists last year, and such conversations have no legal confidentiality protections. Users can delete shared links, but already‑indexed conversations may still linger online.

Interesting, right? Was this email forwarded to you? Click below to get exclusive insights and tech news delivered straight to your inbox!