|
The new MacBook Air and MacBook Pro models feature a keyboard change that was easy to miss during Apple's announcements last week.
|
|
Windows 10 support ended back in October. Here's how to keep access to Windows 10 security updates without spending a dime.
|
|
The UK government is working on a controversial data bill that would allow AI companies like Google and OpenAI to train their models on copyrighted materials without consent. However, following a two-month consultation, it looks like passage of the law will be delayed. "Copyright is going to be kicked down the road," a person with knowledge of the matter told The Financial Times.
Responses by stakeholders during the consultation period weren't favorable to any of the government's proposed ideas for use of copyrighted materials, the FT's sources said. There's no expectation now that an AI bill will be part of the King's Speech set for May this year.
As a result, Ministers have decided to go back to the drawing board and spend more time exploring other options. The House of Lords Communications and Digital Committee called on the government to develop a licensing-first regime "underpinned by robust transparency that safeguards creators' livelihoods while supporting sustainable AI growth."
The UK parliament's preferred position on the bill (also argued by tech giants like Google) has been that copyright holders need to formally opt out if they don't want their materials used to train AI models. However, publishers, filmmakers, musicians and others have said that this would be impractical and an existential threat to the UK's creative industries.
The House of Lords took the side of artists and introduced an amendment that would require tech companies to disclose which copyrighted works they have used to train their models.
|
|
Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, are highly inaccurate when connected to corporate databases and are becoming less transparent, according to two studies.
One study by Stanford University showed that as LLMs continue to ingest massive amounts of information and grow in size, the genesis of the data they use is becoming harder to track down. That, in turn, makes it difficult for businesses to know whether they can safely build applications that use commercial genAI foundation models and for academics to rely on them for research.
|
|