An SEC filing has revealed more details on a data breach affecting 23andMe users that was disclosed earlier this fall. The company says its investigation found hackers were able to access information from 0.1 percent of its userbase, or the accounts of about 14,000 of its 14 million total customers, TechCrunch notes. On top of that, the attackers were able to exploit 23andMe's opt-in DNA Relatives feature to access "profile information about other users' ancestry." 23andMe hasn't said how many of these users were affected. Hackers posted information from both groups online.
When the breach was first revealed in October, the company said its investigation "found that no genetic testing results have been leaked." According to the new filing, the data "generally included ancestry information, and, for a subset of those accounts, health-related information based upon the user's genetics." All of this was obtained through a credential-stuffing attack, in which hackers use username and password combinations exposed in breaches of other services to log into accounts whose owners reused the same credentials.
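To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of why credential stuffing works and one common mitigation: screening accounts against credentials known to have leaked elsewhere. The email addresses, passwords and the leaked_pairs set are all hypothetical, and the plain SHA-256 hashing stands in for the salted, slow hashes and breach-monitoring feeds a real service would use; nothing here reflects 23andMe's actual systems.

```python
# Illustrative only: why reused passwords enable credential stuffing, and a
# simple defensive check against a corpus of credentials leaked elsewhere.
import hashlib

def sha256(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# Hypothetical credentials exposed in some other site's breach.
leaked_pairs = {
    ("jane@example.com", "hunter2"),
    ("sam@example.com", "correcthorse"),
}

# Hypothetical accounts on the service being defended (password hashes only).
accounts = {
    "jane@example.com": sha256("hunter2"),          # same password reused -> at risk
    "sam@example.com": sha256("a-different-pass"),  # unique password -> stuffing fails
}

def at_risk_of_stuffing(email: str) -> bool:
    """True if a credential pair leaked elsewhere also unlocks this account."""
    stored = accounts.get(email)
    return any(e == email and sha256(p) == stored for e, p in leaked_pairs)

for email in accounts:
    if at_risk_of_stuffing(email):
        print(f"{email}: leaked password reused here; force a reset and prompt for 2FA")
```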
Meta is failing to stop vast networks of people from using its platforms to promote child abuse content, a new report in The Wall Street Journal says, citing numerous disturbing examples of child exploitation it uncovered on Facebook and Instagram. The report, which comes as Meta faces renewed pressure over its handling of children's safety, has prompted fresh scrutiny from European Union regulators.
In the report, The Wall Street Journal detailed tests it conducted with the Canadian Centre for Child Protection showing how Meta's recommendations can suggest Facebook Groups, Instagram hashtags and other accounts that are used to promote and share child exploitation material. In those tests, Meta was slow to respond to reports about such content, and its own algorithms often made it easier for people to connect with abuse content and with others interested in it.
For example, the Canadian Centre for Child Protection told the paper a "network of Instagram accounts with as many as 10 million followers each has continued to livestream videos of child sex abuse months after it was reported to the company." In another disturbing example, Meta initially declined to take action on a user report about a public-facing Facebook Group called "Incest." The group was eventually taken down, along with other similar communities.
In a lengthy update
The Biden White House recently issued its latest executive order designed to establish a guiding framework for generative artificial intelligence development, including content authentication and the use of digital watermarks to indicate when digital assets made by the federal government are computer generated. Here's how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.
A quick history of watermarking
Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when the paper was held up to a light. Not only were analog watermarks used to authenticate where and how a company's products were produced, the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means of preventing currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.
Though the term "digital watermarking" wasn't coined until 1992, the technology behind it was first patented by the Muzak Corporation in 1954. The system it built, and used until the company was sold in the 1980s, would identify music owned by Muzak by using a "notch filter" to briefly block the audio signal at 1 kHz in a timed pattern, encoding an identification signal directly into the recording.
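As a rough illustration of that idea, below is a minimal Python sketch of a notch-style audio watermark that suppresses a narrow band around 1 kHz according to an on/off pattern, so the timed absence of that band can carry an identifying signal. It assumes a mono signal in a NumPy array, and the pattern, Q factor and burst length are illustrative choices rather than values from the original Muzak system.

```python
# A sketch of a notch-style audio watermark, loosely inspired by the Muzak
# approach described above: briefly remove a narrow band around 1 kHz in a
# timed on/off pattern so the dropouts themselves carry an identifier.
import numpy as np
from scipy.signal import iirnotch, lfilter

def embed_notch_watermark(audio, fs=44100, pattern=(1, 0, 1, 1, 0), burst_sec=0.5):
    """Apply a 1 kHz notch filter only during the bursts where pattern bit == 1."""
    b, a = iirnotch(w0=1000.0, Q=30.0, fs=fs)      # narrow notch centered at 1 kHz
    out = np.asarray(audio, dtype=float).copy()
    n = int(burst_sec * fs)                        # samples per pattern slot
    for i, bit in enumerate(pattern):
        start, stop = i * n, min((i + 1) * n, len(out))
        if bit and start < len(out):
            out[start:stop] = lfilter(b, a, out[start:stop])  # notch this burst
    return out

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs * 3) / fs                     # three seconds of a 1 kHz test tone
    tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
    marked = embed_notch_watermark(tone, fs=fs)
    print(marked.shape)
```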