
Author Archive: Belen De Leon

Ralph Breaks the Internet review: A sweet ride through web culture – CNET

Disney’s Wreck-It Ralph sequel is a hilarious story about friendship that just happens to take place online.
Source: CNET

The battery of the future that could keep planes from polluting

These two researchers are working to help the aviation sector reduce its greenhouse-gas emissions. Their prototype, designed specifically for aircraft, exploits magnetic and configuration tricks to boost performance, but many challenges still lie ahead.
Source: MIT

Surface Pro 6 passes YouTuber’s bend tests, won’t snap in half like iPad Pro

In his testing, popular YouTuber Zack Nelson, better known as JerryRigEverything, finds that the Surface Pro 6 can withstand most stress tests and does not snap in half like the iPad Pro. 


Source: Digital Trends

They’re making a real HAL 9000, and it’s called CASE

Don’t panic! Life imitates art, to be sure, but hopefully the researchers in charge of the Cognitive Architecture for Space Exploration, or CASE, have taken the right lessons from 2001: A Space Odyssey, and their AI won’t kill us all and/or expose us to alien artifacts so we enter a state of cosmic nirvana. (I think that’s what happened.)

CASE is primarily the work of Pete Bonasso, who has been working in AI and robotics for decades — since well before the current vogue of virtual assistants and natural language processing. It’s easy to forget these days that research in this area goes back to the middle of the last century, with a boom in the ’80s and ’90s as computing and robotics began to proliferate.

The question is how to intelligently monitor and administrate a complicated environment like that of a space station, crewed spaceship or a colony on the surface of the Moon or Mars. A simple question with an answer that has been evolving for decades; the International Space Station (which just turned 20) has complex systems governing it and has grown more complex over time — but it’s far from the HAL 9000 that we all think of, and which inspired Bonasso to begin with.

“When people ask me what I am working on, the easiest thing to say is, ‘I am building HAL 9000,’ ” he wrote in a piece published today in the journal Science Robotics. Currently that work is being done under the auspices of TRACLabs, a research outfit in Houston.

One of the many challenges of this project is marrying the various layers of awareness and activity together. It may be, for example, that a robot arm needs to move something on the outside of the habitat. Meanwhile someone may also want to initiate a video call with another part of the colony. There’s no reason for one single system to encompass command and control methods for robotics and a VOIP stack — yet at some point these responsibilities should be known and understood by some overarching agent.

CASE, therefore, isn’t some kind of mega-intelligent know-it-all AI, but an architecture for organizing systems and agents that is itself an intelligent agent. As Bonasso describes in his piece, and as is documented more thoroughly elsewhere, CASE is composed of several “layers” that govern control, routine activities and planning. A voice interaction system translates human-language queries or commands into tasks those layers can carry out. But it’s the “ontology” system that’s the most important.

Any AI expected to manage a spaceship or colony has to have an intuitive understanding of the people, objects and processes that make it up. At a basic level, for instance, that might mean knowing that if there’s no one in a room, the lights can turn off to save power but it can’t be depressurized. Or if someone moves a rover from its bay to park it by a solar panel, the AI has to understand that it’s gone, how to describe where it is and how to plan around its absence.
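As an illustration only, the kind of common-sense constraint described above could be sketched as rules over a tiny ontology. Everything here is hypothetical — the class and function names are invented for this sketch and are not from CASE or TRACLabs:

```python
# Hypothetical sketch of common-sense constraints over a habitat ontology.
# None of these names come from CASE; they only illustrate the idea that
# the same fact ("nobody is in the room") licenses one action but not another.

class Room:
    def __init__(self, name, occupants=0, scheduled_for_maintenance=False):
        self.name = name
        self.occupants = occupants
        self.scheduled_for_maintenance = scheduled_for_maintenance

def can_turn_off_lights(room):
    # Saving power is safe whenever nobody is inside.
    return room.occupants == 0

def can_depressurize(room):
    # Emptiness alone is not enough: depressurizing is drastic, so this
    # sketch also demands an explicit maintenance flag.
    return room.occupants == 0 and room.scheduled_for_maintenance

lab = Room("lab", occupants=0)
quarters = Room("quarters", occupants=2)

print(can_turn_off_lights(lab))    # True: empty, lights may go off
print(can_depressurize(lab))       # False: empty, but not flagged for work
print(can_depressurize(quarters))  # False: people inside
```

The point of the sketch is that identical sensor facts feed different rules depending on what the action means for the people in the habitat — which is exactly the "intuitive understanding" the passage describes.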

This type of common sense logic is deceptively difficult and is one of the major problems being tackled in AI today. We have years to learn cause and effect, to gather and put together visual clues to create a map of the world and so on — for robots and AI, it has to be created from scratch (and they’re not good at improvising). But CASE is working on fitting the pieces together.

Screen showing another ontology system from TRACLabs, PRONTOE.

“For example,” Bonasso writes, “the user could say, ‘Send the rover to the vehicle bay,’ and CASE would respond, ‘There are two rovers. Rover1 is charging a battery. Shall I send Rover2?’ Alas, if you say, ‘Open the pod bay doors, CASE’ (assuming there are pod bay doors in the habitat), unlike HAL, it will respond, ‘Certainly, Dave,’ because we have no plans to program paranoia into the system.”
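The rover exchange Bonasso describes amounts to disambiguating a command against the ontology's tracked assets. Purely as a hedged sketch (these names and this logic are invented for illustration, not CASE's real interface):

```python
# Hypothetical sketch of the rover-disambiguation dialogue quoted above.
# The asset table and response strings are invented for illustration.

rovers = {
    "Rover1": {"status": "charging a battery"},
    "Rover2": {"status": "idle"},
}

def send_rover(destination):
    """Return a response string, asking for clarification when only one
    of several rovers is actually available."""
    available = [name for name, info in rovers.items()
                 if info["status"] == "idle"]
    if len(rovers) > 1 and len(available) == 1:
        busy = next(n for n in rovers if n not in available)
        return (f"There are {len(rovers)} rovers. {busy} is "
                f"{rovers[busy]['status']}. Shall I send {available[0]}?")
    if available:
        return f"Sending {available[0]} to the {destination}."
    return "No rover is available."

print(send_rover("vehicle bay"))
# "There are 2 rovers. Rover1 is charging a battery. Shall I send Rover2?"
```

The dialogue works only because the ontology already knows each rover's state — which is why the piece calls the ontology system the most important layer.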

I’m not sure why he had to write “alas” — our love of cinema is exceeded by our will to live, surely.

That won’t be a problem for some time to come, of course — CASE is still very much a work in progress.

“We have demonstrated it to manage a simulated base for about 4 hours, but much needs to be done for it to run an actual base,” Bonasso writes. “We are working with what NASA calls analogs, places where humans get together and pretend they are living on a distant planet or the moon. We hope to slowly, piece by piece, work CASE into one or more analogs to determine its value for future space expeditions.”

I’ve asked Bonasso for some more details and will update this post if I hear back.

Whether a CASE- or HAL-like AI will ever be in charge of a base is almost not a question any more — in a way it’s the only reasonable way to manage what will certainly be an immensely complex system of systems. But for obvious reasons it needs to be developed from scratch with an emphasis on safety, reliability… and sanity.

Source: TechCrunch

Rowhammer Data Hacks Are More Dangerous Than Anyone Feared

Researchers have discovered that the so-called Rowhammer technique works on “error-correcting code” memory, in what amounts to a serious escalation.
Source: Wired

LinkedIn cuts off email address exports with new privacy setting

A win for privacy on LinkedIn could be a big loss for businesses, recruiters and anyone else expecting to be able to export the email addresses of their connections. LinkedIn just quietly introduced a new privacy setting that defaults to blocking other users from exporting your email address. That could prevent some spam, and protect users who didn’t realize anyone who they’re connected to could download their email address into a giant spreadsheet. But the launch of this new setting without warning or even a formal announcement could piss off users who’d invested tons of time into the professional networking site in hopes of contacting their connections outside of it.

TechCrunch was tipped off by a reader that emails were no longer coming through as part of LinkedIn’s Archive tool for exporting your data. Now LinkedIn confirms to TechCrunch that “This is a new setting that gives our members even more control of their email address on LinkedIn. If you take a look at the setting titled ‘Who can download your email’, you’ll see we’ve added a more detailed setting that defaults to the strongest privacy option. Members can choose to change that setting based on their preference. This gives our members control over who can download their email address via a data export.”

That new option can be found under Settings & Privacy -> Privacy -> Who Can See My Email Address? This “Allow your connections to download your email [address of user] in their data export?” toggle defaults to “No.” Most users don’t know it exists because LinkedIn didn’t announce it; there’s merely been a folded up section added to the Help center on email visibility, and few might voluntarily change it to “Yes” as there’s no explanation of why you’d want to. That means nearly no one’s email addresses will appear in LinkedIn Archive exports any more. Your connections will still be able to see your email address if they navigate to your profile, but they can’t grab those from their whole graph.

Facebook came to the same conclusion about restricting email exports back when it was in a data portability fight with Google in 2010. Facebook had been encouraging users to import their Gmail contacts, but refused to let users export their Friends’ email addresses. It argued that users own their own email addresses, but not those of their Friends, so they couldn’t be downloaded — though that stance conveniently prevented any other app from bootstrapping a competing social graph by importing your Facebook friend list in any usable way. I’ve argued that Facebook needs to make friend lists interoperable to give users choice about what apps they use, both because it’s the right thing to do but also because it could deter regulation.

On a social network like Facebook, barring email exports makes more sense. But on LinkedIn’s professional network, where people are purposefully connecting with those they don’t know, and where exporting has always been allowed, making the change silently seems surreptitious. Perhaps LinkedIn didn’t want to bring attention to the fact it was allowing your email address to be slurped up by anyone you’re connected with, given the current media climate of intense scrutiny regarding privacy in social tech. But trying to hide a change that’s massively impactful to businesses that rely on LinkedIn could erode the trust of its core users.

Source: TechCrunch

Facebook is still facing ‘intermittent’ outages for advertisers ahead of Black Friday and Cyber Monday

One day after experiencing a massive outage across its ad network, Facebook, one of the most important online advertising platforms, is still seeing “intermittent” issues for its ad products at one of the most critical times of the year for advertisers.

According to a spokesperson for the company, while most systems are restored there are still intermittent issues that could affect advertisers.

For most of the day yesterday, advertisers were unable to create and edit campaigns through Ads Manager or the Ads API tools.

The company said that existing ads were still delivered, but advertisers could not set up new campaigns or make any changes to existing ones, according to several users of the network.

Reporting has been restored for all interfaces, according to the company, but conversion data may be delayed throughout the day for the Americas and in the evening for other regions.

The company declined to comment on how many campaigns were affected by the outage or on whether it intends to compensate or make up for the outage with advertisers on the platform.

Some advertisers are still experiencing outages and are not happy about it.

This is a bad look for a company that is already fighting fires on any number of other fronts. But unlike bullying, hate speech, and disinformation — problems that don’t directly touch Facebook’s bottom line — selling ads is how Facebook actually makes money.

In the busiest shopping season of the year (and therefore one of the busiest advertising seasons of the year) for Facebook to have no response and for some developers to still be facing intermittent outages on the platform is a bad sign.

Source: TechCrunch

Apple puts its next generation of AI into sharper focus as it picks up Silk Labs

Apple’s HomePod is a distant third behind Amazon and Google when it comes to market share for smart speakers that double up as home hubs, with less than 5 percent share of the market for these devices in the US, according to one recent survey. And its flagship personal assistant Siri has also been determined to lag behind Google when it comes to comprehension and precision. But there are signs that the company is intent on doubling down on AI, putting it at the center of its next generation of products, and it’s using acquisitions to help it do so.

The Information reports that Apple has quietly acquired Silk Labs, a startup based out of San Francisco that had worked on AI-based personal assistant technology both for home hubs and mobile devices.

There are two notable things about Silk’s platform that set it apart from that of other assistants: it was able to modify its behaviour as it learned more about its users over time (both using sound and vision); and it was designed to work on-device — a nod to privacy and concerns about “always on” speakers listening to you; improved processing on devices; and the constraints of the cloud and networking technology.

Apple has not returned requests for comment, but we’ve found that at least some of Silk Labs’ employees appear already to be working for Apple (LinkedIn lists nine employees for Silk Labs, all with engineering backgrounds).

That means it’s not clear whether this is a full acquisition or an acqui-hire — as we learn more, we will update this post — but bringing on the team (and potentially the technology) speaks to Apple’s interest in doubling down to build products that are not mere repeats of what is already on the market.

Silk Labs first emerged in February 2016, the brainchild of Andreas Gal, the former CTO of Mozilla, who had also created the company’s ill-fated mobile platform, Firefox OS; and Michael Vines, who came from Qualcomm. (Vines, incidentally, moved on in June 2018 to become the principal engineer for a blockchain startup, Solana.)

Its first product was originally conceived as integrated software and hardware: the company raised just under $165,000 in a Kickstarter to build and ship Sense, a smart speaker that would provide a way to control connected home devices and answer questions, and — with a camera integrated into the device — be able to monitor rooms and learn to recognise people and their actions.

Just four months later, Silk Labs announced that it would shelve the Sense hardware to focus specifically on the software, called Silk, after it said it started to receive inquiries from OEMs interested in getting a version of the platform to run on their own devices (it also raised money outside of Kickstarter, around $4 million).

Potentially, Silk could give those OEMs a way of differentiating from the plethora of devices that are already on the market. In addition to products from the likes of Google and Amazon, there are also a number of speakers powered by those assistants, along with devices using Cortana from Microsoft.

When Silk Labs announced that it was halting hardware development, it noted that it was in talks for some commercial partnerships (while at the same time open sourcing a basic version of the Silk platform for creating communications with IoT devices).

Silk Labs never disclosed the names of those partners, but buying and shutting down the company would be one way of making sure that the technology stays with just one company.

It’s tempting to match what Silk Labs has built up to now with Apple’s efforts in its own smart speaker, the HomePod.

Specifically, it could provide the HomePod with a smarter engine that learns about its users, operates even when the internet is down, and protects user privacy, while crucially becoming a linchpin for how you might operate everything else in your connected life.

That would make for a mix of features that would clearly separate it from the market leader of the moment, and play into aspects — specifically privacy — that people are increasingly starting to value more.

But if you consider the spectrum of hardware and services that Apple is now involved in, you can see that the Silk team, and potentially the IP, may also end up having a wider impact.

Apple has had a mixed run when it comes to AI. The company was an early mover when it first put its Siri voice assistant into the iPhone 4S in 2011, and for a long time people would mention it alongside Amazon and Google (less so Microsoft) when lamenting how a select few technology companies were snapping up all the AI talent, leaving little room for other companies to build products or have a stake in how the field developed at larger scale.

More recently, though, it appears that the likes of Amazon — with its Alexa-powered portfolio of devices — and Google have stolen a march when it comes to consumer products built with AI technologies at their core, and as their primary interface with their users. (Siri, if anything, sometimes feels like a nuisance when you accidentally call it into action via the Touch Bar or the home button on an older-model iPhone.)

But it’s almost certainly wrong to guess Apple — one of the world’s biggest companies, known for playing its hand close to its chest — has lost its way in this area.

There have been a few indications, though, that it’s getting serious and rethinking how it is doing things.

A few months ago, it reorganized its AI teams under ex-Googler John Giannandrea, losing some talent in the process but more significantly setting the pace for how its Siri and Core ML teams would work together and across different projects at the company, from developer tools to mapping and more. 

Apple has also made dozens of smaller and bigger acquisitions in the last several years that speak to it picking up more talent and IP in the quest to build out its AI muscle across different areas, from augmented reality and computer vision through to big data processing at the back end. It’s even acquired other startups, such as VocalIQ in England, that focus on voice interfaces and ‘learn’ from interactions.

To be sure, the company has started to see a deceleration in iPhone unit sales (if not revenues: prices are higher than ever), and that will mean a focus on newer devices, and ever more weight put on the services that run on those devices. Services can be augmented and expanded, and they represent recurring income — two big reasons why Apple will shift more investment into them.

Expect to see that AI net covering not just the iPhone, but computers, Apple’s smart watch, its own smart speaker, the HomePod, Apple Music, Health and your whole digital life.

Source: TechCrunch

Driven to safety — it’s time to pool our data

For most Americans, the thought of cars autonomously navigating our streets still feels like a science fiction story. Despite the billions of dollars invested into the industry in recent years, no self-driving car company has proven that its technology is capable of producing mass-market autonomous vehicles anytime soon.

In fact, a recent IIHS investigation identified significant flaws in assisted driving technology and concluded that in all likelihood “autonomous vehicle[s] that can go anywhere, anytime” will not be market-ready for “quite some time.” The complexity of the problem has even led Uber to consider spinning off its autonomous car unit as a means of soliciting minority investments — in short, the cost of solving this problem is time and billions (if not trillions) of dollars.

Current shortcomings aside, there is a legitimate need for self-driving technology: every year, nearly 1.3 million people die and 2 million people are injured in car crashes. In the U.S. alone, 40,000 people died last year due to car accidents, putting car accident-based deaths in the top 15 leading causes of death in America. GM has determined that the major cause for 94 percent of those car crashes is human error. Independent studies have verified that technological advances such as ridesharing have reduced automotive accidents by removing from our streets drivers who should not be operating vehicles.


We should have every reason to believe that autonomous driving systems — deterministic and finely tuned computers always operating at peak performance — will all but eliminate on-road fatalities. The challenge of developing self-driving technology is rooted in replicating the incredibly nuanced cognitive decisions we make every time we get behind the wheel.

Anyone with experience in the artificial intelligence space will tell you that quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focused on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together.


The siloed petabytes (and soon exabytes) of road data that these companies hoard should be, without giving away trade secrets or information about their models, pooled into a nonprofit consortium, perhaps even a government entity, where every mile driven is shared and audited for quality. By all means, take this data to your private company and consume it, make your models smarter and then provide more road data to the pool to make everyone smarter — and more importantly, increase the pace at which we have truly autonomous vehicles on the road, and their safety once they’re there.

This data is diverse yet fundamentally public — I am not suggesting that companies hand over private, privileged data, but that they actively pool and combine what their cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider 40,000 people are preventably dying every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licenseable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition.

Though big technology and automotive companies may scoff at the idea of sharing their data, the competition generated from a level data playing field could create tens of thousands of new high-tech jobs. Any government dollar spent on aggregating road data would be considered capitalized as opposed to lost — public data sets can be reused by researchers for AI and cross-disciplinary projects for many years to come.

The most ethical (and most economically sensible) choice is that all data generated by autonomous vehicle companies should be part of a contiguous system built to make for a smarter, safer humanity. We can’t afford to wait any longer.

Source: TechCrunch

You Won't Win the Thanksgiving Fight. But You Can Survive

The deep conflicts dividing America will never be solved over a turkey leg. But there are science-backed ways to survive family arguments.
Source: Wired