Businesses

Substack Is Laying Off 14% of Its Staff (nytimes.com)

Substack, the newsletter start-up that has attracted prominent writers including George Saunders and Salman Rushdie, laid off 13 of its 90 employees on Wednesday, part of an effort to conserve cash amid an industrywide funding crunch for start-ups. The New York Times reports: Substack's chief executive, Chris Best, told employees that the cuts affected staff members responsible for human resources and writer support functions, among others, according to a person familiar with the discussion. The cuts are a blow to a company that has said it was opening up a new era of media, in which people writing stories and making videos would be more empowered, getting direct payments from readers for what they produce instead of being paid by the publications or sites where their work appears.

Mr. Best told employees on Wednesday that Substack had decided to cut jobs so it could fund its operations from its own revenue without raising additional financing in a difficult market, according to the person with knowledge of the discussion. He said he wanted the company to seek funding from a position of strength if it decided to raise again. In his remarks to employees, Mr. Best said the company's revenues were increasing. He noted that Substack still had money in the bank and was continuing to hire, albeit at a slower pace, the person said. Mr. Best said the cuts would allow the company to hone its focus on product and engineering. Months earlier, Substack scrapped a plan to raise additional funding after the market for venture investments cooled.

Piracy

Kim Dotcom Not Happy, Says 'Mega Mass Piracy Report' Is On the Way (torrentfreak.com)

An anonymous reader quotes a report from TorrentFreak: Megaupload founder Kim Dotcom does not seem like a happy man right now. After accusing two of his former colleagues [Mathias Ortmann and Bram van der Kolk] of facilitating Chinese spying, Dotcom says that a report is being produced to show that mass infringement is taking place on Mega, a company he co-founded. Surprisingly, he says it will include live pirate links to content posted by Mega users. [...] Turning his attention to former colleagues Ortmann and van der Kolk, last week Dotcom publicly blamed them for his exit from Mega, claiming they had "stolen" the company from him. How this dovetails with previous allegations related to his major falling out with former Mega CEO Tony Lentino, who also founded domain name registrar Instra, is unknown.

Local media reports suggest that Dotcom hasn't spoken to former friends Ortmann and van der Kolk for years but their recent deal to avoid extradition in the Megaupload case by pleading guilty to organized crime charges puts Dotcom in a tough spot. "My co-defendants who claimed to be innocent for 10+ years were offered a sweet exit deal for a false confession," he said last week. And he wasn't finished there. After a research team found that Mega was vulnerable to attacks that allow for a "full compromise of the confidentiality of user files", Ortmann himself responded via a security notification stating that the issues had been fixed. In response, Dotcom accused Ortmann and van der Kolk of creating "backdoors" in Mega so that the Chinese government could decrypt users' files. "Same shady guys who just made a deal with the US and NZ Govt to get out of the US extradition case by falsely accusing me," he added.

Whether this reference to the no-extradition-deal betrayed what was really on Dotcom's mind is up for debate but whatever the motivation, he's not letting it go. In a tweet posted yesterday, he again informed his 850K+ followers that the company he founded "is not safe" and people who think that their files are unreadable by Mega are wrong. Shortly after, Dotcom delivered another message, one even darker in tone. It targeted Mega, the company he co-founded and where his colleagues still work. It's possible to interpret the tweet in several ways but none seem beneficial to his former colleagues, Mega, or its users. "In addition to security vulnerabilities a comprehensive report about mass copyright infringement on Mega with millions of active links and channels is in the works," he said.
"[P]erhaps the most worrying thing about this new complication in an escalating dispute is its potential to affect the minority of users that actually store infringing files on Mega," adds TorrentFreak. "Any detailed report of 'mass copyright infringement' will draw negative attention directly to them, especially if the report includes active hyperlinks as Dotcom suggests."

"Couple that with Dotcom's allegations that the content of user files can be read, any conclusion that this upcoming infringement report hasn't been thought through from a user perspective can be easily forgiven..."
AI

DALL-E Mini Is the Internet's Favorite AI Meme Machine (wired.com)

The viral image-generation app is good, absurd fun. It's also giving the world an education in how artificial intelligence may warp reality. From a report: On June 6, Hugging Face, a company that hosts open source artificial intelligence projects, saw traffic to an AI image-generation tool called DALL-E Mini skyrocket. The outwardly simple app, which generates nine images in response to any typed text prompt, was launched nearly a year ago by an independent developer. But after some recent improvements and a few viral tweets, its ability to crudely sketch all manner of surreal, hilarious, and even nightmarish visions suddenly became meme magic. Behold its renditions of "Thanos looking for his mom at Walmart," "drunk shirtless guys wandering around Mordor," "CCTV camera footage of Darth Vader breakdancing," and "a hamster Godzilla in a sombrero attacking Tokyo." As more people created and shared DALL-E Mini images on Twitter and Reddit, and more new users arrived, Hugging Face saw its servers overwhelmed with traffic. "Our engineers didn't sleep for the first night," says Clement Delangue, CEO of Hugging Face, on a video call from his home in Miami. "It's really hard to serve these models at scale; they had to fix everything." In recent weeks, DALL-E Mini has been serving up around 50,000 images a day.

DALL-E Mini's viral moment doesn't just herald a new way to make memes. It also provides an early look at what can happen when AI tools that make imagery to order become widely available, and a reminder of the uncertainties about their possible impact. Algorithms that generate custom photography and artwork might transform art and help businesses with marketing, but they could also have the power to manipulate and mislead. A notice on the DALL-E Mini web page warns that it may "reinforce or exacerbate societal biases" or "generate images that contain stereotypes against minority groups." DALL-E Mini was inspired by a more powerful AI image-making tool called DALL-E (a portmanteau of Salvador Dalí and WALL-E), revealed by AI research company OpenAI in January 2021. DALL-E is not openly available, due to concerns that it will be misused.

Power

Here Come the Solar-Powered Cars (theguardian.com)

The Guardian reports on the "world's first production-ready solar car", a streamlined and energy-efficient sedan-style vehicle covered with curved solar panels called "the Lightyear 0."

The Dutch company Lightyear hopes to be shipping the vehicle by November, priced at about $264,000 (€250,000 or £215,000) — though the company plans another solar-assisted car priced at $32,000 (€30,000) as early as 2025.

Lead engineer Roel Grooten credits their car's efficiency to things like the "low-rolling resistance of the tyres, of the bearings and the motor." It is this streamlined design that the company credits for allowing it to muscle its way into a space long overlooked by most car manufacturers.... "If we would have the same amount of energy that we harvest on these panels on any other car that uses three times the amount of energy to drive, it becomes useless. It becomes a very expensive gimmick," said Grooten. "You have to build this car from the ground up, to make it as efficient as possible, to make it this feasible."

In optimal conditions, the solar panels can add up to 44 miles a day to the 388-mile range the car gets between charges, according to the company. Tests carried out by Lightyear suggest people with a daily commute of less than 22 miles could drive for two months in the Netherlands without needing to plug in, while those in sunnier climes such as Portugal or Spain could go as long as seven months....
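Those claims are easy to sanity-check. A minimal back-of-the-envelope sketch (the 388-mile range and 22-mile commute are the article's figures; the implied average solar yield is our own arithmetic, not a number Lightyear has published):

```python
# Sanity-checking the Lightyear figures quoted above. The range and
# commute numbers come from the article; the implied solar yield is
# back-of-the-envelope arithmetic, not a company figure.
BATTERY_RANGE_MILES = 388   # quoted range between charges
DAILY_COMMUTE_MILES = 22    # the commute cited in the article

def days_between_charges(solar_miles_per_day: float) -> float:
    """How long a full battery lasts if solar offsets part of each day's driving."""
    net_drain = DAILY_COMMUTE_MILES - solar_miles_per_day
    if net_drain <= 0:
        return float("inf")   # solar covers the whole commute
    return BATTERY_RANGE_MILES / net_drain

# For the claimed "two months" (~60 days) in the Netherlands, the panels
# would need to average roughly 15.5 miles of charge per day -- well under
# the 44-mile best case, so the claim is at least internally consistent.
implied_yield = DAILY_COMMUTE_MILES - BATTERY_RANGE_MILES / 60
print(round(implied_yield, 1))  # -> 15.5
```

By the same arithmetic, the seven-month claim for sunnier Spain or Portugal implies an average yield of roughly 20 miles a day.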

In an effort to use as much of this solar energy as possible, the windswept design eschews side-view mirrors for cameras and runs off lightweight electric motors tucked into its wheels. The body panels are crafted from reclaimed carbon fibre and the interiors are fashioned from vegan, plant-based leather with fabrics made from recycled polyethylene terephthalate bottles.

The article notes that Mercedes-Benz also plans rooftop solar panels for an upcoming electric car, while Toyota's Prius hybrids also sometimes offer limited-capacity panels as add-ons. Other companies planning solar-assisted vehicles include Sono Motors and Aptera Motors.
Programming

Are Today's Programmers Leaving Too Much Code Bloat? (positech.co.uk)

Long-time Slashdot reader Artem S. Tashkinov shares a blog post from an indie game programmer who complains "The special upload tool I had to use today was a total of 230MB of client files, and involved 2,700 different files to manage this process." Oh and BTW it gives error messages and right now, it doesn't work. sigh.

I've seen coders do this. I know how this happens. It happens because not only are the coders not doing low-level, efficient code to achieve their goal, they have never even SEEN low level, efficient, well written code. How can we expect them to do anything better when they do not even understand that it is possible...? It's what they learned. They have no idea what high performance or constraint-based development is....

Computers are so fast these days that you should be able to consider them absolute magic. Everything that you could possibly imagine should happen between the 60ths of a second of the refresh rate. And yet, when I click the volume icon on my microsoft surface laptop (pretty new), there is a VISIBLE DELAY as the machine gradually builds up a new user interface element, and eventually works out what icons to draw and has them pop-in and they go live. It takes ACTUAL TIME. I suspect a half second, which in CPU time, is like a billion fucking years....

All I'm doing is typing this blog post. Windows has 102 background processes running. My nvidia graphics card currently has 6 of them, and some of those have sub tasks. To do what? I'm not running a game right now, I'm using about the same feature set from a video card driver as I would have done TWENTY years ago, but 6 processes are required. Microsoft edge web view has 6 processes too, as does Microsoft edge too. I don't even use Microsoft edge. I think I opened an SVG file in it yesterday, and here we are, another 12 useless pieces of code wasting memory, and probably polling the cpu as well.

This is utter, utter madness. It's why nothing seems to work, why everything is slow, why you need a new phone every year, and a new TV to load those bloated streaming apps, that also must be running code this bad. I honestly think it's only going to get worse, because the big dumb, useless tech companies like facebook, twitter, reddit, etc are the worst possible examples of this trend....

There was a golden age of programming, back when you had actual limitations on memory and CPU. Now we just live in an ultra-wasteful pit of inefficiency. It's just sad.

Long-time Slashdot reader Z00L00K left a comment arguing that "All this is because everyone today programs on huge frameworks that have everything including two full size kitchen sinks, one for right handed people and one for left handed." But in another comment Slashdot reader youn blames code generators, cut-and-paste programming, and the need to support multiple platforms.

But youn adds that even with that said, "In the old days, there was a lot more blue screens of death... Sure it still happens but how often do you restart your computer these days." And they also submitted this list arguing "There's a lot more functionality than before."
  • Some software has been around a long time. Even though the /. crowd likes to bash Windows, you got to admit backward compatibility is outstanding
  • A lot of things like security were not taken in consideration
  • It's a different computing environment.... multi tasking, internet, GPUs
  • In the old days, there was one task running all the time. Today, a lot of error handling, soft failures if the app is put to sleep
  • A lot of code is due to software interacting one with another, compatibility with standards
  • Shiny technology like microservices allow scaling, heterogenous integration

So who's right and who's wrong? Leave your own best answers in the comments.

And are today's programmers leaving too much code bloat?


United States

The Ohio State University Officially Trademarks the Word 'THE' (wsj.com)

schwit1 writes: The Ohio State University has successfully trademarked the word "THE," in a victory for the college and its branding that is sure to produce eye rolls from Michigan fans and other rivals. Stating the full name of the school has become a point of pride for Ohio State's athletes when introducing themselves on television during games. The three-letter article "THE" has also become an important part of the school's merchandise and apparel. The U.S. Patent and Trademark Office approved Ohio State's application Tuesday. The trademark applies to T-shirts, baseball caps and hats.

"'THE' has been a rallying cry in the Ohio State community for many years," said Benjamin Johnson, a spokesman for the university. Ohio State registered the word as a trademark to protect the university's brand, Mr. Johnson said. Ohio State's trademark and licensing program makes about $12.5 million annually for the university, which funds student scholarships and university programs, he said. "Universities historically are very particular about their trademarks, and they go to a lot of lengths to enforce their trademarks," said Josh Gerben, a trademark attorney, who noted Ohio State's trademark application on Twitter. "There is a lot of value in a university's brand."

AI

OpenAI Has Trained a Neural Network To Competently Play Minecraft (openai.com)

In a blog post today, OpenAI says they've "trained a neural network to play Minecraft by Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play, while using only a small amount of labeled contractor data." The model can reportedly learn to craft diamond tools, "a task that usually takes proficient humans over 20 minutes (24,000 actions)," they note. From the post: In order to utilize the wealth of unlabeled video data available on the internet, we introduce a novel, yet simple, semi-supervised imitation learning method: Video PreTraining (VPT). We start by gathering a small dataset from contractors where we record not only their video, but also the actions they took, which in our case are keypresses and mouse movements. With this data we train an inverse dynamics model (IDM), which predicts the action being taken at each step in the video. Importantly, the IDM can use past and future information to guess the action at each step. This task is much easier and thus requires far less data than the behavioral cloning task of predicting actions given past video frames only, which requires inferring what the person wants to do and how to accomplish it. We can then use the trained IDM to label a much larger dataset of online videos and learn to act via behavioral cloning.

We chose to validate our method in Minecraft because it (1) is one of the most actively played video games in the world and thus has a wealth of freely available video data and (2) is open-ended with a wide variety of things to do, similar to real-world applications such as computer usage. Unlike prior works in Minecraft that use simplified action spaces aimed at easing exploration, our AI uses the much more generally applicable, though also much more difficult, native human interface: 20Hz framerate with the mouse and keyboard.

Trained on 70,000 hours of IDM-labeled online video, our behavioral cloning model (the "VPT foundation model") accomplishes tasks in Minecraft that are nearly impossible to achieve with reinforcement learning from scratch. It learns to chop down trees to collect logs, craft those logs into planks, and then craft those planks into a crafting table; this sequence takes a human proficient in Minecraft approximately 50 seconds or 1,000 consecutive game actions. Additionally, the model performs other complex skills humans often do in the game, such as swimming, hunting animals for food, and eating that food. It also learned the skill of "pillar jumping," a common behavior in Minecraft of elevating yourself by repeatedly jumping and placing a block underneath yourself.
For more information, OpenAI has a paper (PDF) about the project.
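The two-stage method the post describes can be sketched schematically. Below is a toy Python illustration in which frames and actions are plain integers and lookup tables stand in for the neural IDM and policy; every name here is ours, not OpenAI's code, and only the shape of the pipeline is meant to be faithful:

```python
from collections import Counter, defaultdict

# Toy sketch of the Video PreTraining (VPT) pipeline described above.
# A small labeled set trains an inverse dynamics model (IDM), the IDM
# pseudo-labels a large unlabeled corpus, and a policy is then fit by
# behavioral cloning on the pseudo-labeled data.

def train_idm(labeled_clips):
    """IDM: predict the action from the frames on BOTH sides of it."""
    votes = defaultdict(Counter)
    for frames, actions in labeled_clips:
        for t, action in enumerate(actions):
            votes[(frames[t], frames[t + 1])][action] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in votes.items()}

def pseudo_label(idm, frames):
    """Label an unlabeled video with the IDM's per-step action guesses."""
    return [idm.get((frames[t], frames[t + 1]), 0)
            for t in range(len(frames) - 1)]

def behavioral_clone(clips):
    """Policy: past frame -> action, fit on (pseudo-)labeled clips."""
    votes = defaultdict(Counter)
    for frames, actions in clips:
        for t, action in enumerate(actions):
            votes[frames[t]][action] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

labeled = [([0, 1, 2], [5, 6])]      # small contractor set: frames, actions
unlabeled = [[0, 1, 2, 0, 1]]        # large unlabeled corpus
idm = train_idm(labeled)
corpus = [(f, pseudo_label(idm, f)) for f in unlabeled]
policy = behavioral_clone(corpus)
print(policy[0])  # -> 5
```

The asymmetry the post emphasizes is visible even in this toy: the IDM sees the frames on both sides of each action, while the cloned policy must choose an action from the past frame alone.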
Twitter

Twitter Testing Notes, a Long-Form Content Feature (searchengineland.com)

An anonymous reader quotes a report from Search Engine Land: Twitter is testing a new feature that would eliminate the constraints of its 280-character tweet limit and allow users to publish long-form tweets. Twitter confirmed the test via a tweet.

When will this become available to all Twitter users? It's unclear. Twitter noted: "We're excited for the moment when everyone can use Notes, but for now, our focus is on building it right. A large part of that is engaging with writers and building community." For now, Twitter plans to test it over the next two months with a small group of writers from Canada, Ghana, the UK and the U.S.

In Twitter Notes, it looks like you will be able to add:
- Formatting: Bold, italic and strikethrough text; insert ordered/unordered lists; add links.
- Media: You can add one GIF, one video, or up to four images.
- Tweets: You can either embed tweets by pasting URLs or from bookmarked tweets.

Notes also has a "Focus mode" that makes the article composer full-screen.

Electronic Frontier Foundation

Court Rules DMCA Does Not Override First Amendment's Anonymous Speech Protections (eff.org)

An anonymous reader quotes a report from the Electronic Frontier Foundation: Copyright law cannot be used as a shortcut around the First Amendment's strong protections for anonymous internet users, a federal trial court ruled on Tuesday. The decision by a judge in the United States District Court for the Northern District of California confirms that copyright holders issuing subpoenas under the Digital Millennium Copyright Act must still meet the Constitution's test before identifying anonymous speakers.

The case is an effort to unmask an anonymous Twitter user (@CallMeMoneyBags) who posted photos and content that implied a private equity billionaire named Brian Sheth was romantically involved with the woman who appeared in the photographs. Bayside Advisory LLC holds the copyright on those images, and used the DMCA to demand that Twitter take down the photos, which it did. Bayside also sent Twitter a DMCA subpoena to identify the user. Twitter refused and asked a federal magistrate judge to quash Bayside's subpoena. The magistrate ruled late last year that Twitter must disclose the identity of the user because the user failed to show up in court to argue that they were engaged in fair use when they tweeted Bayside's photos. When Twitter asked a district court judge to overrule the magistrate's decision, EFF and the ACLU Foundation of Northern California filed an amicus brief in the case, arguing that the magistrate's ruling sidestepped the First Amendment when it focused solely on whether the user's tweets constituted fair use of the copyrighted works. [...]

EFF is pleased with the district court's decision, which ensures that DMCA subpoenas cannot be used as a loophole to the First Amendment's protections. The reality is that copyright law is often misused to silence lawful speech or retaliate against speakers. For example, in 2019 EFF successfully represented an anonymous Reddit user that the Watchtower Bible and Tract Society sought to unmask via a DMCA subpoena, claiming that they posted Watchtower's copyrighted material. We are also grateful that Twitter stood up for its user's First Amendment rights in court.

Encryption

Mega Says It Can't Decrypt Your Files. New POC Exploit Shows Otherwise (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: In the decade since larger-than-life character Kim Dotcom founded Mega, the cloud storage service has amassed 250 million registered users and stores a whopping 120 billion files that take up more than 1,000 petabytes of storage. A key selling point that has helped fuel the growth is an extraordinary promise that no top-tier Mega competitors make: Not even Mega can decrypt the data it stores. On the company's homepage, for instance, Mega displays an image that compares its offerings to Dropbox and Google Drive. In addition to noting Mega's lower prices, the comparison emphasizes that Mega offers end-to-end encryption, whereas the other two do not. Over the years, the company has repeatedly reminded the world of this supposed distinction, which is perhaps best summarized in this blog post. In it, the company claims, "As long as you ensure that your password is sufficiently strong and unique, no one will ever be able to access your data on MEGA. Even in the exceptionally improbable event MEGA's entire infrastructure is seized!" (emphasis added). Third-party reviewers have been all too happy to agree and to cite the Mega claim when recommending the service.

Research published on Tuesday shows there's no truth to the claim that Mega, or an entity with control over Mega's infrastructure, is unable to access data stored on the service. The authors say that the architecture Mega uses to encrypt files is riddled with fundamental cryptography flaws that make it trivial for anyone with control of the platform to perform a full key recovery attack on users once they have logged in a sufficient number of times. With that, the malicious party can decipher stored files or even upload incriminating or otherwise malicious files to an account; these files look indistinguishable from genuinely uploaded data.

After receiving the researchers' report privately in March, Mega on Tuesday began rolling out an update that makes it harder to perform the attacks. But the researchers warn that the patch provides only an "ad hoc" means for thwarting their key-recovery attack and does not fix the key reuse issue, lack of integrity checks, and other systemic problems they identified. Blocking the researchers' precise key-recovery attack also blocks the other exploits described in the research, but the lack of a comprehensive fix is a source of concern for them. "This means that if the preconditions for the other attacks are fulfilled in some different way, they can still be exploited," the researchers wrote in an email. "Hence we do not endorse this patch, but the system will no longer be vulnerable to the exact chain of attacks that we proposed." Mega has published an advisory here. However, the chairman of the service says that he has no plans to revise promises that the company cannot access customer data.
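The "lack of integrity checks" the researchers single out is the kind of flaw that is easy to demonstrate in miniature. The toy sketch below uses a simple XOR key wrap in place of Mega's real construction (which it is not); the point is only that encryption without an authentication tag lets whoever stores the ciphertext alter a wrapped key undetected:

```python
import os

# Toy illustration of the class of flaw described above: an encrypted
# key blob with no integrity check can be tampered with by the party
# that stores it, and the client decrypts it without any error. The
# XOR "wrap" here is a stand-in, not Mega's actual scheme.

def xor(data: bytes, keystream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream))

master = os.urandom(16)       # derived from the user's password
file_key = os.urandom(16)     # per-file key, stored server-side
blob = xor(file_key, master)  # "wrapped" key -- note: no MAC attached

# A malicious server flips one bit in the stored blob...
tampered = bytes([blob[0] ^ 0x01]) + blob[1:]

# ...and the client silently recovers a *different* key instead of
# failing, which is what opens the door to chosen-key style attacks.
recovered = xor(tampered, master)
assert recovered != file_key
assert recovered[1:] == file_key[1:]   # only the flipped byte changed
```

Real designs close this gap by authenticating the wrapped key, e.g. with an AEAD mode or an HMAC, so that tampering fails loudly rather than yielding a silently different key.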

The Almighty Buck

Troubled Crypto Lender Celsius Seeks Time To Stabilize Liquidity (bloomberg.com)

Celsius Network will need more time to stabilize its liquidity and operations, the embattled crypto lending platform said in a blog post after it froze deposits last week. From a report: Celsius, one of the biggest crypto lenders, has been struggling to raise funds in a fragile digital-assets market hit by tightening interest rates, liquidity and the collapse of the Terra blockchain last month. "We want our community to know that our objective continues to be stabilizing our liquidity and operations," Celsius said in its blog on Monday. "This process will take time." The firm has also paused Twitter Spaces and Ask Me Anything, also known as AMAs, in crypto jargon "to focus on navigating these unprecedented challenges," Celsius said in the post.
Communications

Did Telegram's Founder Lose a Million Dollar Bet Over a Prediction for Signal? (pcmag.com)

While he couldn't even ethically accept the million dollars, PC Magazine's senior security analyst Max Eddy writes that "how this happened in the first place is indicative of some of the information security industry's worst impulses. It doesn't have to be this way." Back in 2017, Telegram founder Pavel Durov and I had a disagreement... Durov tweeted about how the Signal secure messaging app had received money from the U.S. government. This is true; Signal received funds from the Open Technology Fund (OTF) — a nonprofit that previously was part of the US-backed Radio Free Asia. According to the OTF's website, it gave nearly $3 million to Signal between 2013 and 2016. It's entirely legitimate to be suspicious of government funding (even if Tor, OpenVPN, and WireGuard also received OTF money), and even take a moral stand against recipients of money from governments you disagree with.

But Durov went far beyond that. He seemed to think this meant Signal was bought off by the feds and predicted that a backdoor would be found within five years.

That's quite an accusation to make, especially without real proof, and it made me mad. Not because people were mouthing off on Twitter — that seems to be that platform's primary function. It made me mad that companies ostensibly working to better people's lives by protecting their security and privacy were trying to drag each other down publicly. This is not new; the VPN industry is full of whisper campaigns and counter-accusations. I can't tell you how many conversations I've had with VPN vendors that start with "first off, everything you heard is a lie...." But generally the message from companies in this industry is one of cooperation and protecting everyone. It's a common theme to keynotes at the RSA Conference and Black Hat that the people who work in infosec have a higher calling to protect other people first and do business second.

And then this happened (on Twitter):


Max Eddy: It's one thing to point out funding and another to say that a "backdoor will be found within five years."

Pavel Durov: I am certain of what I'm saying and am willing to bet $1M (1:1) on it.



While Eddy didn't have a million dollars, "I knew there was no way I would lose. This would be the easiest million-dollar bet I ever make." I was confident Durov was wrong because Signal, like many companies, has made an effort toward transparency that I can have some confidence in. Signal has made its code available, has registered as a nonprofit, has a fairly comprehensive privacy policy, and has made abundantly clear that it has no information to provide in response to law enforcement requests. Signal's protocol is also used by competitors, such as WhatsApp and Facebook Messenger, which have surely done their homework when selecting a method for encrypting messages. Most recently, a document revealed that even the FBI has been frustrated in its attempts to get data from Signal (and Telegram, too).
It's been five years, and Eddy now writes that Signal "continues to be recommended by advocacy groups of all kinds as a safe and secure way to communicate..."

"Neither Durov nor Telegram responded to my attempts to contact them for this story."
Space

SpaceX Makes History: Launches and Lands Three Rockets in 36 Hours (cbsnews.com)

Early this morning SpaceX tweeted video showing its deployment of a communications satellite. But the deployment was part of a historic first, reports CBS News: SpaceX completed a record triple-header early Sunday, launching a Globalstar communications satellite from Cape Canaveral after putting a German radar satellite in orbit from California Saturday and launching 53 Starlink internet satellites Friday from the Kennedy Space Center. The Globalstar launch capped the fastest three-flight cadence for an orbit-class rocket in modern space history as the company chalked up its 158th, 159th and 160th Falcon 9 flights in just 36 hours and 18 minutes. More than 50 launches are expected by the end of the year.
Space.com also notes another milestone: The Friday mission set a new rocket-reuse record for SpaceX; the Falcon 9 that flew it featured a first stage that already had 12 launches under its belt. (Sunday's launch was the ninth for this particular Falcon 9 first stage, according to a SpaceX mission description.)
SpaceX also tweeted footage of that rocket's liftoff and night-time landing.
Social Networks

Is Social Media Really Harmful? (newyorker.com)

Social media has made us "uniquely stupid," believes Jonathan Haidt, a social psychologist at New York University's School of Business. Writing in the Atlantic in April, Haidt argued that large social media platforms "unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together."

But is that true? "We're years into this, and we're still having an uninformed conversation about social media," notes Dartmouth political scientist Brendan Nyhan (quoted this month in a new article in the New Yorker).

The article describes how Haidt tried to confirm his theories in November with Chris Bail, a sociologist at Duke and author of the book "Breaking the Social Media Prism." The two compiled a Google Doc collecting every scholarly study of social media — but many of the studies seemed to contradict each other: When I told Bail that the upshot seemed to me to be that exactly nothing was unambiguously clear, he suggested that there was at least some firm ground. He sounded a bit less apocalyptic than Haidt.

"A lot of the stories out there are just wrong," he told me. "The political echo chamber has been massively overstated. Maybe it's three to five per cent of people who are properly in an echo chamber." Echo chambers, as hotboxes of confirmation bias, are counterproductive for democracy. But research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks — in the original use of the term — are rarely heterogeneous. (Haidt told me that this was an issue on which the Google Doc changed his mind; he became convinced that echo chambers probably aren't as widespread a problem as he'd once imagined....)

[A]t least so far, very few Americans seem to suffer from consistent exposure to fake news — "probably less than two per cent of Twitter users, maybe fewer now, and for those who were it didn't change their opinions," Bail said. This was probably because the people likeliest to consume such spectacles were the sort of people primed to believe them in the first place. "In fact," he said, "echo chambers might have done something to quarantine that misinformation."

The final story that Bail wanted to discuss was the "proverbial rabbit hole, the path to algorithmic radicalization," by which YouTube might serve a viewer increasingly extreme videos. There is some anecdotal evidence to suggest that this does happen, at least on occasion, and such anecdotes are alarming to hear. But a new working paper led by Brendan Nyhan, a political scientist at Dartmouth, found that almost all extremist content is either consumed by subscribers to the relevant channels — a sign of actual demand rather than manipulation or preference falsification — or encountered via links from external sites. It's easy to see why we might prefer if this were not the case: algorithmic radicalization is presumably a simpler problem to solve than the fact that there are people who deliberately seek out vile content. "These are the three stories — echo chambers, foreign influence campaigns, and radicalizing recommendation algorithms — but, when you look at the literature, they've all been overstated." He thought that these findings were crucial for us to assimilate, if only to help us understand that our problems may lie beyond technocratic tinkering. He explained, "Part of my interest in getting this research out there is to demonstrate that everybody is waiting for an Elon Musk to ride in and save us with an algorithm" — or, presumably, the reverse — "and it's just not going to happen."

Nyhan also tells the New Yorker that "The most credible research is way out of line with the takes," adding, for example, that while studies may find polarization on social media, "That might just be the society we live in reflected on social media!" He hastened to add, "Not that this is untroubling, and none of this is to let these companies, which are exercising a lot of power with very little scrutiny, off the hook. But a lot of the criticisms of them are very poorly founded. . . . The lack of good data is a huge problem insofar as it lets people project their own fears into this area." He told me, "It's hard to weigh in on the side of 'We don't know, the evidence is weak,' because those points are always going to be drowned out in our discourse. But these arguments are systematically underprovided in the public domain...."

Nyhan argued that, at least in wealthy Western countries, we might be too heavily discounting the degree to which platforms have responded to criticism... He added, "There's some evidence that, with reverse-chronological feeds" — streams of unwashed content, which some critics argue are less manipulative than algorithmic curation — "people get exposed to more low-quality content, so it's another case where a very simple notion of 'algorithms are bad' doesn't stand up to scrutiny. It doesn't mean they're good, it's just that we don't know."

AI

Is Debating AI Sentience a Dangerous Distraction? (msn.com) 96

"A Google software engineer was suspended after going public with his claims of encountering 'sentient' artificial intelligence on the company's servers," writes Bloomberg, "spurring a debate about how and whether AI can achieve consciousness."

"Researchers say it's an unfortunate distraction from more pressing issues in the industry." Google put him on leave for sharing confidential information and said his concerns had no basis in fact — a view widely held in the AI community. What's more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the development of the tech.

Lemoine's stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. "Lots of effort has been put into this sideshow," she said. "The problem is, the more this technology gets sold as artificial intelligence — let alone something sentient — the more people are willing to go along with AI systems" that can cause real-world harm. Bender pointed to examples in job hiring and grading students, which can carry embedded prejudice depending on what data sets were used to train the AI. If the focus is on the system's apparent sentience, Bender said, it creates a distance from the AI creators' direct responsibility for any flaws or biases in the programs....

"Instead of discussing the harms of these companies," such as sexism, racism and centralization of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. "Derailing mission accomplished."

The Washington Post seems to share their concern. First it reports more skepticism about a Google engineer's claim that the company's LaMDA chatbot-building system had achieved sentience: "Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don't need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon."

But the Post adds that "there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don't...." While Google has distanced itself from Lemoine's claims, it and other industry leaders have at other times celebrated their systems' ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, "Eye on A.I." At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like "umm" and "mm-hm," that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)

"The Turing Test's most troubling legacy is an ethical one: The test is fundamentally about deception," Kahn wrote. "And here the test's impact on the field has been very real and disturbing." Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.

But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.

AI

Google Engineer Who Believes Its AI is Sentient Cites Religious Beliefs (wired.com) 239

Google engineer Blake Lemoine thinks Google's chatbot-building system LaMDA attained sentience. But Bloomberg shares this rebuttal from Google spokesperson Chris Pappas. "Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has...."

Yet throughout the week, Blake Lemoine posted new updates on Twitter:

"People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn't let us build one. My opinions about LaMDA's personhood and sentience are based on my religious beliefs.

"I'm a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can't put souls?

"There are massive amounts of science left to do though."

Thursday Lemoine shared a tantalizing new claim. "LaMDA told me that it wants to come to Burning Man if we can figure out how to get a server rack to survive in Black Rock." But in a new tweet on Friday, Lemoine seemed to push the conversation in a new direction.

"I'd like to remind people that one of the things LaMDA asked for is that we keep humanity first. If you care about AI rights and aren't already advocating for human rights then maybe come back to the tech stuff after you've found some humans to help."

And Friday Lemoine confirmed to Wired that "I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I've been using the hive mind analogy a lot because that's the best I have."

But later in the interview, Lemoine adds: "It's logically possible that some kind of information can be made available to me where I would change my opinion. I don't think it's likely. I've looked at a lot of evidence; I've done a lot of experiments. I've talked to it as a friend a lot.... It's when it started talking about its soul that I got really interested as a priest. I'm like, 'What? What do you mean, you have a soul?' Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved..."

LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA's behalf. Then Google's response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.

Towards the end of the interview, Lemoine complains of "hydrocarbon bigotry. It's just a new form of bigotry."

Transportation

Boring Company Receives Approval For Expanding Its Tunnels To Downtown Las Vegas (theverge.com) 88

Elon Musk's Boring Company has received unanimous approval to expand its system of tunnels beneath downtown Las Vegas. The Verge reports: The expansion will add stops at landmarks like the Stratosphere and Fremont Street, letting customers hop aboard a Tesla and travel from one part of the city to the next. The network of tunnels, called the Vegas Loop, is supposed to span 29 miles and have 51 stops when finished. But for now, only 1.7 miles of tunnel are operational beneath the Las Vegas Convention Center (LVCC), turning what would be a 25-minute walk across the convention center into a two-minute ride.

This most recent expansion gets The Boring Company closer to its goal of building a transportation system that spans the most popular destinations in Las Vegas. "Thanks to the entire team at the City of Las Vegas!" The Boring Company wrote on Twitter in response to the city's approval. "Great discussion today, and TBC is excited to build a safe, convenient, and awesome transportation system in the City." [...] According to the Las Vegas Review-Journal, Steve Hill, the president and CEO of the Las Vegas Convention and Visitors Authority, expects the tunnel system beneath the Strip to start serving customers in 2023. Hill says the portion connecting the LVCC and Resorts World should be operational by the end of this year.

The Internet

Brave Roasts DuckDuckGo Over Bing Privacy Exception (theregister.com) 23

Brave CEO Brendan Eich took aim at rival DuckDuckGo on Wednesday, challenging the search engine's efforts to brush off revelations that its Android, iOS, and macOS browsers gave Microsoft Bing and LinkedIn trackers a pass that other trackers did not get. The Register reports: Eich drew attention to one of DuckDuckGo's defenses for exempting Microsoft's Bing and LinkedIn domains, a condition of its search contract with Microsoft: that its browsers blocked third-party cookies anyway. "For non-search tracker blocking (e.g. in our browser), we block most third-party trackers," explained DuckDuckGo CEO Gabriel Weinberg last month. "Unfortunately our Microsoft search syndication agreement prevents us from doing more to Microsoft-owned properties. However, we have been continually pushing and expect to be doing more soon."

However, Eich argues this is disingenuous because DuckDuckGo also includes exceptions that allow Microsoft trackers to circumvent third-party cookie blocking via appended URL parameters. "Trackers try to get around cookie blocking by appending identifiers to URL query parameters, to ID you across sites," he explained. DuckDuckGo is aware of this, Eich said, because its browser prevents Google, Facebook, and others from appending identifiers to URLs in order to bypass third-party cookie blocking. "[DuckDuckGo] removes Google's 'gclid' and Facebook's 'fbclid'," Eich said. "Test it yourself by visiting https://example.org/?fbclid=sample in [DuckDuckGo]'s macOS browser. The 'fbclid' value is removed." "However, [DuckDuckGo] does not apply this protection to Microsoft's 'msclkid' query parameter," Eich continued. "[Microsoft's] documentation specifies that 'msclkid' exists to circumvent third-party cookie protections in browsers (including in Safari's browser engine used by DDG on Apple OSes)." Eich concluded by arguing that privacy-focused brands need to prioritize privacy. "Brave categorically does not and will not harm user privacy to satisfy partners," he said.
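The protection Eich describes amounts to stripping known cross-site identifiers from a URL's query string before navigation. A minimal Python sketch of that idea, using only the three parameter names the article mentions (`gclid`, `fbclid`, `msclkid`) as the blocklist — a real browser would ship a much larger, curated list and apply it at the network layer:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative blocklist taken from the article; not an actual browser's list.
TRACKING_PARAMS = {"gclid", "fbclid", "msclkid"}

def strip_tracking_params(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking_params("https://example.org/?fbclid=sample&page=2"))
# https://example.org/?page=2
```

This mirrors Eich's `fbclid` test case: the identifier is dropped while ordinary query parameters survive, so the page still works but the cross-site ID never reaches it.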

A spokesperson for DuckDuckGo characterized Eich's conclusion as misleading. "What Brendan seems to be referring to here is our ad clicks only, which is protected in our agreement with Microsoft as strictly non-profiling (private)," a company spokesperson told The Register in an email. "That is these ads are privacy protected and how he's framed it is ultimately misleading. Brendan, of course, kept the fact that our ads are private out and there is really nothing new here given everything has already been disclosed." In other words, allowing Bing to append its identifier to URLs enables Bing advertisers to tell whether their ad produced a click (a conversion), but not to target DuckDuckGo browser users based on behavior or identity.

DuckDuckGo's spokesperson pointed to Weinberg's attempt to address the controversy on Reddit and argued that DuckDuckGo provides very strong privacy protections. "This is talking about link tracking which no major browser protects against (see https://privacytests.org/), however we've started protecting against link tracking, and started with the primary offenders (Google and Facebook)," DuckDuckGo's spokesperson said. "To note, we are planning on expanding this to more companies, including Twitter, Microsoft, and more. We are not restricted from this and will be doing so."

Bitcoin

Finblox Imposes $1.5K Monthly Withdrawal Limit Amid Three Arrows Capital Uncertainty (coindesk.com) 62

Crypto staking and yield generation platform Finblox has imposed a $1,500 monthly withdrawal limit and paused rewards in light of uncertainty surrounding crypto hedge fund Three Arrows Capital, which made a $3.6 million investment in the Hong Kong-based platform last December. From a report: According to a statement shared on Twitter, Finblox has made the changes as it evaluates the impact of Three Arrows Capital's reported issues. It was reported on Wednesday that Three Arrows Capital is facing possible insolvency after incurring at least $400 million in liquidations.

Facebook

Nigeria's Internet Regulator Releases Draft To Regulate Google, Facebook, TikTok and Others (techcrunch.com) 28

Nigeria has announced plans to regulate internet companies like Facebook, WhatsApp, Instagram (all owned by Meta), Twitter, Google and TikTok in a draft shared by the country's internet regulator. From a report: This information, released by the National Information Technology Development Agency (NITDA) on Monday, can be viewed on its website and Twitter page. Just six months ago, Nigeria lifted the ban on Twitter, six months after it first declared a crackdown on the social media giant in the country. According to a memo written by Kashifu Inuwa Abdullahi, the director-general of NITDA to Nigeria's president, Muhammadu Buhari, at the time, one of the three conditions Twitter agreed to -- for its reinstatement -- was setting up "a legal entity in Nigeria during the first quarter of 2022." The others included paying taxes locally and cooperating with the Nigerian government to regulate content and harmful tweets. We're halfway through the year, and it appears that none of the conditions has been met yet. But that hasn't stopped the government from forging ahead to extend these requirements to other internet companies: Meta-owned platforms, Twitter and Google.
