Wednesday, 31st January 2024
Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy,
Peter G. Neumann, moderator
Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after
each link the body of the digest. The shield takes you to a breakdown of Terms of Service for the site – however only a small number of sites are covered at the moment.
The flashlight take you to an analysis of the various trackers etc. that the linked site delivers.
Please let the website maintainer know if
you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
Contents
Offshore Wind Farms Vulnerable to Cyberattacks- Rizwan Choudhury
Tesla Hacked at Pwn2Own Automotive 2024- Sergiu Gatlan
America’s Dangerous Trucks- Frontline
Authorities investigating massive security breach at Global Affairs Canada- CBC
Why the 737 MAX 9 door plug blew out- Lauren Weinstein
Man sues Macy’s, saying false facial recognition match led to jail assault- WashPost
Bugs in our pockets: the risks of client-side scanning- Journal of Cybersecurity Oxford Academic
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training- Arxiv
ERCIM News 136 published – Special Theme: Large Language Models- Peter Kunz
Deepfake Audio of Biden Alarms Experts- Margi Murphy
The Great Freight-Train Heists of the 21st Century- Slashdot
Nightshade: a new tool artists can use to *poison* AI models that scrape their online work- Lauren Weinstein
ChatGPT is leaking passwords from private conversations of users- Ars Technica reader says
Impact of AI on Software Development- Taylor Soper
AI maxim- Lauren Weinstein
Is American Journalism Headed Toward an Extinction-Level Event?- geoff goodfellow
Huge Proportion of Internet Is AI-Generated Slime, Researchers Find- Maggie Harrison
How Beloved Indie Blog ‘The Hairpin’ Turned Into an AI Clickbait Farm- WiReD
Twitter/X says that it has temporarily blocked some searches for Taylor Swift while they try deal with the flood of AI-porn related to her- LW
Taylor Swift, Travis Kelce and a MAGA Meltdown- NYTimes
YOUR PAPERS PLEASE! – Florida House passes bill that would ban children under 16 from social media- Axios
Hawley and the tech CEOs- Lauren Weinstein
Congress and the states want to bring a Chinese-style police state Internet to the U.S.- Lauren Weinstein
iPhone Apps Secretly Harvest Data When They Send Notifications- Thomas Germain
In India, an algorithm declares them dead; they have to prove they’re alive- Steve Bacher
Tech Layoffs Shock Young Workers. The Older People? Not So Much.- NYTimes
Re: Even after a recall, Tesla’s Autopilot does dumb dangerous things- Geoff Kuenning
Re: ChatGPT can answer yes or no at the same time- Amos Shapir
Re: Tesla Drivers in Chicago Confront a Harsh Foe: Cold Weather (Goldberg,- John Levine
One-star rating deserved for apps that allow full-screen ads- Dan Jacobson
Info on RISKS (comp.risks)
Offshore Wind Farms Vulnerable to Cyberattacks (Rizwan Choudhury)
ACM TechNews<[email protected]>
Wed, 31 Jan 2024 11:05:43 -0500 (EST)
Rizwan Choudhury, *Interesting Engineering*, 24 Jan 2024 via ACM TechNews, 31 Jan 2024 Researchers at Canada's Concordia University and the Hydro-Quebec Research Institute studied the cybersecurity risks associated with offshore wind farms, specifically those using voltage-source-converter high-voltage direct-current (VSC-HVDC) connections. In simulations, the researchers found that cyberattacks could cause blackouts or equipment damage by prompting poorly dampened power oscillations that are amplified by the HVDC system and spread to the main grid.
Tesla Hacked at Pwn2Own Automotive 2024 (Sergiu Gatlan)
ACM TechNews<[email protected]>
Fri, 26 Jan 2024 11:19:56 -0500 (EST)
Sergiu Gatlan, *BleepingComputer*, 24 Jan 2024 On the first day of the Pwn2Own Automotive 2024 hacking contest, security researchers hacked a Tesla Modem, collecting awards totaling $722,500 for three bug collisions and 24 unique zero-day exploits. The Synacktiv Team chained three zero-day bugs to obtain root permissions on a Tesla Modem, for which it won $100,000. The team won another $120,000 by hacking a Ubiquiti Connect EV Station and a JuiceBox 40 Smart EV Charging Station using unique two-bug chains, and $16,000 related to a known exploit chain targeting the ChargePoint Home Flex EV charger.
America’s Dangerous Trucks (Frontline)
Gabe Goldberg<[email protected]>
Sun, 28 Jan 2024 12:46:13 -0500
Deadly traffic accidents involving large trucks have surged over the past decade. FRONTLINE and ProPublica examine one gruesome kind of truck accident ”- underride crashes -” and why they keep happening. Trucking industry representatives and the government’s lead agency on traffic safety have said that their top priority is safety. Drawing on more than a year of reporting ”- including leaked documents and interviews with former government insiders, trucking industry representatives, and families of underride crash victims ”- the documentary reveals how, for decades, federal regulators proposed new rules to try to prevent underride crashes. Over and over, pushback from trucking industry lobbyists won the day, leaving drivers of smaller vehicles vulnerable. https://www.pbs.org/wgbh/frontline/documentary/americas-dangerous-trucks/ The risks? Regulatory capture and science denial. Plus a cavalier attitude towards people dying. Stay away from trucks.
Authorities investigating massive security breach at Global Affairs Canada (CBC)
Matthew Kruk<[email protected]>
Tue, 30 Jan 2024 16:41:06 -0700
https://www.cbc.ca/news/politics/global-affairs-security-breach-1.7099290 Canadian authorities are investigating a prolonged data security breach following the "detection of malicious cyber activity" affecting the internal network used by Global Affairs Canada staff, according to internal department emails viewed by CBC News. The breach affects at least two internal drives, as well as emails, calendars and contacts of many staff members. CBC News spoke to multiple sources with knowledge of the situation, including employees who have received instructions on how the breach affects their ability to work. Some were told to stop working remotely as of last Wednesday.
Why the 737 MAX 9 door plug blew out
Lauren Weinstein<[email protected]>
Tue, 30 Jan 2024 10:20:52 -0800
It is now reported that the reason the door plug blew out on that 737 MAX 9 is that Boeing workers at the factory failed to install the necessary bolts to hold it in place. This permitted the plug to gradually move upward out of its slot and then ultimately blow out. This also is the probable reason why that plane had a number of pressure warnings in preceding days, because air would have likely been leaking past the plug as it worked loose. -L [added later: Just to be clear, the actual bolt installation failure may have been by a subsidiary/contractor, but Boeing was responsible in any case since the plane left their factory in that condition. -L ]
Man sues Macy’s, saying false facial recognition match led to jail assault (WashPost)
Jan Wolitzky<[email protected]>
Mon, 22 Jan 2024 19:01:31 -0500
A man was sexually assaulted in jail after being falsely accused of armed robbery due to a faulty facial recognition match, his attorneys said, in a case that further highlights the dangers of the technology's expanding use by law enforcement. Harvey Murphy Jr., 61, said he was beaten and raped by three men in a Texas jail bathroom in 2022 after being booked on charges he'd held up employees at gunpoint inside a Sunglass Hut in a Houston shopping center, according to a lawsuit he filed last week. A representative of a nearby Macy's told Houston police during the investigation that the company's system, which scanned surveillance-camera footage for faces in an internal shoplifter database, found evidence that Murphy had robbed both stores, leading to his arrest. But at the time of the robbery, his attorneys said, Murphy was in a Sacramento jail on unrelated charges, nearly 2,000 miles away. Hours after his sexual assault, prosecutors released him with all charges dropped, his attorneys said. https://www.washingtonpost.com/technology/2024/01/22/facial-recognition-wrongful-identification-assault/
Bugs in our pockets: the risks of client-side scanning (Journal of Cybersecurity Oxford Academic)
Gabe Goldberg<[email protected]>
Tue, 30 Jan 2024 13:26:08 -0500
Our increasing reliance on digital technology for personal, economic, and government affairs has made it essential to secure the communications and devices of private citizens, businesses, and governments. This has led to pervasive use of cryptography across society. Despite its evident advantages, law enforcement and national security agencies have argued that the spread of cryptography has hindered access to evidence and intelligence. Some in industry and government now advocate a new technology to access targeted data: client-side scanning (CSS). Instead of weakening encryption or providing law enforcement with backdoor keys to decrypt communications, CSS would enable on-device analysis of data in the clear. If targeted information were detected, its existence and, potentially, its source would be revealed to the agencies; otherwise, little or no information would leave the client device. Its proponents claim that CSS is a solution to the encryption versus public safety debate: it offers privacy”in the sense of unimpeded end-to-end encryption”and the ability to successfully investigate serious crime. In this paper, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance. Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society, while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which CSS can fail, can be evaded, and can be abused. https://academic.oup.com/cybersecurity/article/10/1/tyad020/7590463
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training (Arxiv)
<[email protected]>Thu, 25 Jan 2024 10:31:49 -0500
https://arxiv.org/pdf/2401.05566.pdf "Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety."
ERCIM News 136 published – Special Theme: Large Language Models
Peter Kunz<[email protected]>
Wed, 31 Jan 2024 15:23:57 +0100
A new ERCIM News issue (136) is online with a special theme on Large Language Models (LLMs). This issue features articles on diverse topics, such as LLMs in education and professional training, ethics and fairness in public sector use, knowledge management, information retrieval, software modeling, LLM capability assessment, and advancements like enhanced pre-training efficiency. You can access the issue at https://ercim-news.ercim.eu/
Deepfake Audio of Biden Alarms Experts (Margi Murphy)
ACM TechNews<[email protected]>
Wed, 24 Jan 2024 11:45:53 -0500 (EST)
Margi Murphy, Bloomberg, 22 Jan 2024, via ACM TechNews, 24 Jan 2024 A telephone message containing deepfake audio of U.S. President Joe Biden called on New Hampshire voters to avoid yesterday's Democratic primary and save their votes for the November election. This comes amid rising concerns about the use of political deepfakes to influence elections around the world this year. Audio deepfakes are especially concerning, given that they are easy and inexpensive to create and hard to trace.
The Great Freight-Train Heists of the 21st Century (Slashdot)
Tom Van Vleck<[email protected]>
Sat, 27 Jan 2024 09:15:55 -0500
https://yro.slashdot.org/story/24/01/27/0010210/the-great-freight-train-heists-of-the-21st-century The e-commerce boom "reshaped freight shipping to meet consumer demand, opening vulnerabilities." So crooks are breaking into containers being shipped by freight and stealing the Amazon boxes. [Is this a "computer related RISK"? almost every crime nowadays has a computer nearby. THVV] [It is a probably a computer-related risk, and certainly so if they can get access to the manifests and container IDs. PGN]
Nightshade: a new tool artists can use to *poison* AI models that scrape their online work
Lauren Weinstein<[email protected]>
Mon, 22 Jan 2024 07:22:34 -0800
Note that their project web page at: https://nightshade.cs.uchicago.edu/whatis.html is in what to me is an almost impossible-to-read light font. I assume "poisoning" human readers is not also part of their goal set. -L https://boingboing.net/2024/01/22/nightshade-a-new-tool-artists-can-use-to-poison-ai-models-that-scrape-their-online-work.html
ChatGPT is leaking passwords from private conversations of users (Ars Technica reader says)
Dave Farber<[email protected]>
Wed, 31 Jan 2024 06:03:16 +0900
Impact of AI on Software Development (Taylor Soper)
ACM TechNews<[email protected]>
Mon, 29 Jan 2024 11:36:46 -0500 (EST)
Taylor Soperxo, *GeekWire*, 23 Jan 2024, via ACM TechNews, 29 Jan 2024 An analysis of 153 million lines of code changed by GitClear, a developer analytics tool built in Seattle, found that "code churn," or the percentage of lines thrown out less than two weeks after being authored, is on the rise. It also found that the percentage of "copy/pasted code" is increasing faster than "updated," "deleted," or "moved" code. Said GitClear's Bill Harding, "In this regard, the composition of AI-generated code is similar to a short-term developer that doesn't thoughtfully integrate their work into the broader project."
AI maxim
Lauren Weinstein<[email protected]>
Sun, 21 Jan 2024 10:42:30 -0800
The familiar computing maxim "garbage in, garbage out"—dating to the late 1950s or early 1960s—needs to be updated to "quality in, garbage out" when it comes to most generative AI systems. -L [Maybe it's a minim, not a maxim. PGN]
Is American Journalism Headed Toward an Extinction-Level Event?
geoff goodfellow<[email protected]>
Tue, 30 Jan 2024 11:45:28 -0700
For a few hours last Tuesday, the entire news business seemed to be collapsing all at once. Journalists at Time magazine and National Geographic announced that they had been laid off. Unionized employees at magazines owned by Conde Nast staged a one-day strike to protest imminent cuts. By far the grimmest news was from the Los Angeles Times, the biggest newspaper west of the Washington DC area. After weeks of rumors, the paper announced that it was cutting 115 people, more than 20 percent of its newsroom. [News is no longer news or even new. AI is just one under-miner of honest journalism. Money is also driving the demise. The more biased journalism becomes, the more ads either go away or pile on, depending on the bias. The money for Superbowl ads is something like $7M for 30 seconds. The money for Superbowl tickets is approaching $10K per ticket, especially if you want to sit together with anyone else. PGN]
Huge Proportion of Internet Is AI-Generated Slime, Researchers Find (Maggie Harrison)
Dave Farber<[email protected]>
Mon, 22 Jan 2024 07:32:37 +0900
Maggie Harrison, *Futurism*, 19 Jan 2024 https://futurism.com/the-byte/internet-ai-generated-slime [Note: paper has not been peer reviewed.(djf) ]
How Beloved Indie Blog ‘The Hairpin’ Turned Into an AI Clickbait Farm (WiReD)
Lauren Weinstein<[email protected]>
Fri, 26 Jan 2024 14:50:54 -0800
https://www.wired.com/story/plaintext-hairpin-blog-ai-clickbait-farm/
Twitter/X says that it has temporarily blocked some searches for Taylor Swift while they try deal with the flood of AI-porn related to her
Lauren Weinstein<[email protected]>
Sun, 28 Jan 2024 08:07:03 -0800
Also: If Taylor Swift Can't Defeat Deepfake Porn, No One Can There's also word that the estate of legendary comedian George Carlin is suing over a special that reportedly used an AI recreation of him. -L https://www.wired.com/story/taylor-swift-deepfake-porn-artificial-intelligence-pushback/
Taylor Swift, Travis Kelce and a MAGA Meltdown (NYTimes)
Monty Solomon<[email protected]>
Wed, 31 Jan 2024 09:33:22 -0500
The fulminations surrounding the world’s biggest pop icon-” and girlfriend of KC Chiefs' tight-end Travis Kelce -” reached the stratosphere after Kansas City made it to the Super Bowl. https://www.nytimes.com/2024/01/30/us/politics/taylor-swift-travis-kelce-trump.html
YOUR PAPERS PLEASE! – Florida House passes bill that would ban children under 16 from social media (Axios)
Lauren Weinstein<[email protected]>
Thu, 25 Jan 2024 18:51:55 -0800
These fascist plans would end up requiring ALL USERS to be verified and identified via government IDs, irrespective ot their age, resulting eventually in the ability to track all users' Internet usage in detail. Don't be fooled by the "protect the children" claims. -L https://www.axios.com/2024/01/25/florida-house-bill-social-media-child-ban
Hawley and the tech CEOs
Lauren Weinstein<[email protected]>
Wed, 31 Jan 2024 09:33:49 -0800
It's really something to see Hawley, who should be in prison for his actions on 6 Jan 2023, yelling at the tech CEOs. There's lots wrong with Big Tech, but Congress has no clue how to fix it, and will only make it far worse and more dangerous for children and adults. And this holds for BOTH parties. In this respect they are EQUALLY BAD. -L
Congress and the states want to bring a Chinese-style police state Internet to the U.S.
Lauren Weinstein<[email protected]>
Wed, 31 Jan 2024 09:08:15 -0800
Basically, both parties in Congress—and legislators in both blue and red states—want to turn the Internet into a China-style police state, where all activity is tracked and tied to government IDs. Even if you trust one party not to abuse this, imagine when the other party gets into power! All of this is being leveraged on a "protect the children" basis where the legislative demands would be ineffective at preventing children from accessing the materials of concern, trample on the rights of adults to use the Net, and actually expose children to more risks from abusive parents. That's the bottom line. -L
iPhone Apps Secretly Harvest Data When They Send Notifications (Thomas Germain)
ACM TechNews<[email protected]>
Mon, 29 Jan 2024 11:36:46 -0500 (EST)
Thomas Germain, *Gizmodo*, 25 Jan 2024, via ACM TechNews, 29 Jan 2024 Security researchers at the app development firm Mysk Inc. found that some iPhone apps are using notifications to get around Apple's privacy rules governing the collection of user data. The researchers said the data being collected through notification appears related to analytics, advertising, and tracking users across different apps and devices. The use of notifications for gathering user data also gets around the practice of closing apps to prevent them from background data collection.
In India, an algorithm declares them dead; they have to prove they’re alive
Steve Bacher<[email protected]>
Mon, 29 Jan 2024 12:47:36 -0800
*Rohtak and New Delhi, India:* Dhuli Chand was 102 years old on September 8, 2022, when he led a wedding procession in Rohtak, a district town in the north Indian state of Haryana. As is customary in north Indian weddings, he sat on a chariot in his wedding finery, wearing garlands of Indian rupee notes, while a band played celebratory music and family members and villagers accompanied him. But instead of a bride, Chand was on his way to meet government officials. Chand resorted to the antic to prove to officials that he was not only alive but also lively. A placard he held proclaimed, in the local dialect: “thara foofa zinda hai”, which literally translates to “your uncle is alive”. Six months prior, his monthly pension was suddenly stopped because he was declared “dead” in government records. Under Haryana’s Old Age Samman Allowance scheme, people aged 60 years and above, whose income together with that of their spouse doesn't exceed 300,000 rupees ($3,600) per annum, are eligible for a monthly pension of 2,750 rupees ($33). In June 2020, the state started using a newly built algorithmic system “ the Family Identity Data Repository or the Parivar Pehchan Patra (PPP) database “ to determine the eligibility of welfare claimants. The PPP is an eight-digit unique ID provided to each family in the state and has details of birth and death, marriage, employment, property, and income tax, among other data, of the family members. It maps every family’s demographic and socioeconomic information by linking several government databases to check their eligibility for welfare schemes. The state said that the PPP created “authentic, verified and reliable data of all families”, and made it mandatory for citizens to access all welfare schemes. But in practice, the PPP wrongly marked Chand as “dead”, denying him his pension for several months. Worse, the authorities did not change his “dead” status even when he repeatedly met them in person. [...] https://www.aljazeera.com/economy/2024/1/25/in-india-an-algorithm-declares-them-dead-they-have-to-prove-theyre
Tech Layoffs Shock Young Workers. The Older People? Not So Much. (NYTimes)
Monty Solomon<[email protected]>
Wed, 31 Jan 2024 09:34:54 -0500
The industry’s recent job cuts have been an awakening for a generation of workers who have never experienced a cyclical crash. https://www.nytimes.com/2023/01/20/technology/tech-layoffs-millennials-gen-x.html
Re: Even after a recall, Tesla’s Autopilot does dumb dangerous things (The Washington Post)
Geoff Kuenning<[email protected]>
Wed, 24 Jan 2024 18:15:43 -0800
I was completely unimpressed by the Washington Post article on Tesla's autosteering feature. Cancel that: I was disgusted. I am hardly a Tesla fan. But the author of the article complained that the automatic STEERING feature blew through stop signs. No duh. My Kia Niro would do the same thing; steering has nothing to do with controlling speed. Anybody who expects a steering feature to recognize speed bumps, stop signs, etc. is far too stupid to operate an automobile, let alone write a *WashPost* column on technology.
Re: ChatGPT can answer yes or no at the same time (RISKS-34.04)
Amos Shapir<[email protected]>
Tue, 23 Jan 2024 11:26:30 +0200
This item, as well as the next one about Tesla's Autopilot, show a strangely ignored fact: These systems are simply not ready for public use. Would you accept an accounting system which makes simple calculation errors, or a search application which invents nonexistent results rather than seek them?
Re: Tesla Drivers in Chicago Confront a Harsh Foe: Cold Weather (Goldberg, RISKS-34.05)
“John Levine”<[email protected]>
21 Jan 2024 09:32:41 -0500
> In freezing temperatures, the batteries of electric vehicles can be less > efficient and have shorter range, a lesson many Tesla drivers in Chicago > learned this week. There is an old joke that we are lucky the car industry grew up in Detroit rather than in Miami. Otherwise every time it snowed, all cars would come to a halt. Now we know it's true!
One-star rating deserved for apps that allow full-screen ads
Dan Jacobson<[email protected]>
Sat, 27 Jan 2024 11:57:12 +0800
The ads on my phone have two sizes, 1) A few lines at the bottom of the screen, and 2) Full screen. The full screen ones, no matter what app they appear in, these days all say things like "press to continue" or "press for next step". I.e., fooling the user into thinking it is the app doing the talking. With the "few lines at the bottom of the screen" ads, no matter what wild things it says, we still know it is just an ad, because the babble appears in the ad spot. So when apps get "one star ratings" it is often due to the ads in the apps, not the apps themselves. But they are still deserved, due to the developer taking the risk to allow full screen ads.
Please report problems with the web pages to the maintainer
Top