

LA to become first US subway to install portable body scanners

Devices can detect metallic and non-metallic objects from 30 feet away.
Source: CNET

Why Saudi Arabia Would Want to Invest in Elon Musk and Tesla

Spending on electric cars is a great way to diversify an economy built on the back of Big Oil.
Source: Wired

Keys to reliably automating cybersecurity tasks

More and more companies are incorporating artificial intelligence into their cybersecurity services. But if they apply it carelessly, hastily, or simply because it’s fashionable, their customers can end up even more vulnerable to hackers. Here are some tips for using AI correctly.
Source: MIT

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
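SEER’s control code isn’t public, but the imitative loop described here — track facial features, map them onto actuators, and smooth the noisy face data that causes those occasional freak-outs — can be sketched roughly as follows. The feature names, servo ranges, and smoothing constant are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Face:
    """Normalized features from a face tracker (each in [-1, 1])."""
    brow_raise: float
    eyelid_open: float
    head_yaw: float
    head_pitch: float

def mirror(face: Face, prev: dict, alpha: float = 0.3) -> dict:
    """Map tracked features onto servo angles, with exponential smoothing
    to damp the jitter that noisy face data would otherwise cause."""
    targets = {
        "brow": face.brow_raise * 30.0,   # assumed servo travel, in degrees
        "lid": face.eyelid_open * 20.0,
        "yaw": face.head_yaw * 45.0,
        "pitch": face.head_pitch * 25.0,
    }
    # Low-pass filter: new = alpha * target + (1 - alpha) * previous.
    return {k: alpha * v + (1 - alpha) * prev.get(k, 0.0)
            for k, v in targets.items()}

pose = mirror(Face(brow_raise=1.0, eyelid_open=0.5,
                   head_yaw=0.0, head_pitch=0.0), prev={})
```

Running the filter every camera frame is what keeps the motion tender rather than twitchy: raw landmark data is too noisy to drive servos directly.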

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

Source: TechCrunch

Steam survey shows PC gamers are still mostly playing in 1080p and lower

Valve Software’s latest hardware and software survey, for July 2018, reveals that 63.72 percent of Steam’s registered members still play games at 1080p resolution. What’s more, only 1.14 percent are playing at 4K resolution.


Source: Digital trends

Finding the Goldilocks zone for applied AI

While Elon Musk and Mark Zuckerberg debate the dangers of artificial general intelligence, startups applying AI to more narrowly defined problems such as accelerating the performance of sales teams and improving the operating efficiency of manufacturing lines are building billion-dollar businesses. Narrowly defining a problem, however, is only the first step to finding valuable business applications of AI.

To find the right opportunity around which to build an AI business, startups must apply the “Goldilocks principle” in several different dimensions to find the sweet spot that is “just right” to begin — not too far in one dimension, not too far in another. Here are some ways for aspiring startup founders to thread the needle with their AI strategy, based on what we’ve learned from working with thousands of AI startups.

“Just right” prediction time horizons

Unlike pre-intelligence software, AI systems respond to the environment in which they operate: algorithms take in data and return an answer or prediction. Depending on the application, that prediction may describe an outcome in the near term, such as tomorrow’s weather, or an outcome many years in the future, such as whether a patient will develop cancer in 20 years. The time horizon of the algorithm’s prediction is critical to its usefulness and to whether it offers an opportunity to build defensibility.

Algorithms making predictions with long time horizons are difficult to evaluate and improve. For example, an algorithm may use the schedule of a contractor’s previous projects to predict that a particular construction project will fall six months behind schedule and go over budget by 20 percent. Until this new project is completed, the algorithm designer and end user can only tell whether the prediction is directionally correct — that is, whether the project is falling behind or costs are higher.

Even when the final project numbers end up very close to the predicted numbers, it will be difficult to complete the feedback loop and positively reinforce the algorithm. Many factors may influence complex systems like a construction project, making it difficult to A/B test the prediction to tease out the input variables from unknown confounding factors. The more complex the system, the longer it may take the algorithm to complete a reinforcement cycle, and the more difficult it becomes to precisely train the algorithm.

While many enterprise customers are open to piloting AI solutions, startups must be able to validate the algorithm’s performance in order to complete the sale. The most convincing way to validate an algorithm is by using the customer’s real-time data, but this approach may be difficult to achieve during a pilot. If the startup does get access to the customer’s data, the prediction time horizon should be short enough that the algorithm can be validated during the pilot period.
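Whether a pilot can validate an algorithm live is largely arithmetic: enough full prediction windows have to fit inside the pilot period. A minimal sketch, with hypothetical horizon and pilot lengths:

```python
def validatable_in_pilot(horizon_days: int, cycles_needed: int,
                         pilot_days: int) -> bool:
    """The pilot can validate the algorithm on live customer data only if
    enough complete prediction windows fit inside the pilot period."""
    return horizon_days * cycles_needed <= pilot_days

# Hypothetical numbers: a 7-day prediction validated over 4 cycles fits a
# 90-day pilot; a 6-month forecast cannot complete even one cycle.
assert validatable_in_pilot(7, 4, 90)
assert not validatable_in_pilot(180, 1, 90)
```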

Historic data, if it’s available, can serve as a stopgap to train an algorithm and temporarily validate it via backtesting. Training an algorithm making long time horizon predictions on historic data is risky because processes and environments are more likely to have changed the further back you dig into historic records, making historic data sets less descriptive of present-day conditions.

In other cases, while the historic data describing outcomes exists for you to train an algorithm, it may not capture the input variable under consideration. In the construction example, you might discover that sites using blue safety hats are more likely to complete projects on time, but since hat color wasn’t previously considered relevant to managing projects, it was never recorded in the archival records. This data must be captured from scratch, which further delays your time to market.

Instead of making singular “hero” predictions with long time horizons, AI startups should build multiple algorithms making smaller, simpler predictions with short time horizons. Decomposing an environment into simpler subsystems or processes limits the number of inputs, making them easier to control for confounding factors. The BIM 360 Project IQ Team at Autodesk takes this small prediction approach to areas that contribute to construction project delays. Their models predict safety and score vendor and subcontractor quality/reliability, all of which can be measured while a project is ongoing.

Shorter time horizons make it easier for the algorithm engineer to monitor changes in its performance and take action to quickly improve it, instead of being limited to backtesting on historic data. The shorter the time horizon, the shorter the algorithm’s feedback loop will be. As each cycle through the feedback loop incrementally compounds the algorithm’s performance, shorter feedback loops are better for building defensibility.
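The compounding effect of shorter feedback loops can be shown with a toy simulation; the starting accuracy and per-cycle gain below are made-up numbers, not measurements:

```python
def accuracy_after(days: int, cycle_days: int,
                   start: float = 0.70, gain: float = 0.05) -> float:
    """Toy model: each completed feedback cycle closes `gain` of the
    remaining gap between current accuracy and 1.0."""
    acc = start
    for _ in range(days // cycle_days):
        acc += gain * (1.0 - acc)
    return acc

slow = accuracy_after(360, 90)   # 4 feedback cycles in a year
fast = accuracy_after(360, 10)   # 36 feedback cycles in a year
```

With identical per-cycle improvement, the short-horizon algorithm simply gets many more cycles in the same calendar time, which is where the defensibility comes from.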

“Just right” actionability window

Most algorithms model dynamic systems and return a prediction for a human to act on. Depending on how quickly the system is changing, the algorithm’s output may not remain valid for very long: the prediction may “decay” before the user can take action. In order to be useful to the end user, the algorithm must be designed to accommodate the limitations of computing and human speed. 
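The decay described above reduces to a simple staleness check; the function and its parameters are an illustrative sketch, not any particular product’s API:

```python
def still_actionable(made_at_s: float, validity_s: float, now_s: float) -> bool:
    """A prediction decays: it is only worth acting on while the snapshot of
    the environment it was computed from still roughly holds."""
    return (now_s - made_at_s) < validity_s

# A 60-second validity window: acted on 45s later the prediction is still
# usable; 90s later it has decayed.
assert still_actionable(made_at_s=0.0, validity_s=60.0, now_s=45.0)
assert not still_actionable(made_at_s=0.0, validity_s=60.0, now_s=90.0)
```

The design constraint follows directly: compute time plus human reaction time must stay inside the validity window, or the prediction is worthless by the time anyone can act.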

In a typical AI-human workflow, the human feeds input data into the algorithm, the algorithm runs calculations on that input data and returns an output that predicts a certain outcome or recommends a course of action; the human interprets that information to decide on a course of action, then takes action. The time it takes the algorithm to compute an answer and the time it takes for a human to act on the output are the two largest bottlenecks in this workflow. 

For most of AI history, slow computational speeds have severely limited the scope of applied AI. An algorithm’s prediction depends on the input data, and the input data represents a snapshot in time at the moment it was recorded. If the environment described by the data changes faster than the algorithm can compute the input data, by the time the algorithm completes its computations and returns a prediction, the prediction will only describe a moment in the past and will not be actionable. For example, the algorithm behind the music app Shazam may have needed several hours to identify a song after first “hearing” it using the computational power of a Windows 95 computer. 

The rise of cloud computing and the development of hardware specially optimized for AI computations have dramatically broadened the scope of areas where applied AI is actionable and affordable. While macro tech advancements can greatly advance applied AI, the algorithm is not totally held hostage to the current limits of computation; reinforcement through training also can improve the algorithm’s response time. The more often an algorithm encounters the same example, the more quickly it can skip computations and arrive at a prediction. Thanks to advances in computation and reinforcement, today Shazam takes less than 15 seconds to identify a song.

Automating the decision and action also could help users make use of predictions that decay too quickly for humans to respond to. Opsani is one such company using AI to make decisions that are too numerous and fast-moving for humans to make effectively. Unlike human DevOps engineers, who can only move so fast to optimize performance based on recommendations from an algorithm, Opsani applies AI to both identify and automatically improve the operations of applications and cloud infrastructure so its customers can enjoy dramatically better performance.

Not all applications of AI can be completely automated, however, if the perceived risk is too high for end users to accept, or if regulations mandate that humans must approve the decision. 

“Just right” performance minimums

Just like software startups launch when they have built a minimum viable product (MVP) in order to collect actionable feedback from initial customers, AI startups should launch when they reach the minimum algorithmic performance (MAP) required by early adopters, so that the algorithm can be trained on more diverse and fresh data sets and avoid becoming overfit to a training set.

Most applications don’t require 100 percent accuracy to be valuable. For example, a fraud detection algorithm may immediately catch only 5 percent of fraud cases within 24 hours of when they occur, while human fraud investigators catch 15 percent of fraud cases after a month of analysis. In this case, the MAP is zero, because the fraud detection algorithm can serve as a first filter, reducing the number of cases the human investigators must process. The startup can go to market immediately in order to secure access to the large volume of fraud data used for training its algorithm. Over time, the algorithm’s accuracy will improve and reduce the burden on human investigators, freeing them to focus on the most complex cases.
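The first-filter arithmetic is worth making concrete; the case volume below is hypothetical, with only the 5 percent catch rate taken from the example above:

```python
def remaining_for_humans(total_cases: int, algo_catch_rate: float) -> int:
    """Cases the algorithm clears immediately never reach the human queue."""
    return round(total_cases * (1.0 - algo_catch_rate))

# At the 5 percent immediate catch rate above, a hypothetical 10,000 flagged
# cases shrink to 9,500 for the human team, and the rate improves as the
# algorithm trains on the resulting data.
queue = remaining_for_humans(10_000, 0.05)
```

Even a modest filter is net-positive from day one, which is exactly why the MAP here is zero.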

Startups building algorithms for zero or low MAP applications will be able to launch quickly, but may find themselves continuously looking over their shoulder for copycats that appear before their algorithm has reached a high level of performance.

Startups attacking low MAP problems should also watch out for problems that can be solved with near-100 percent accuracy using a very small training set: cases where the system being modeled is relatively simple, with few dimensions to track and few possible variations in outcome.

AI-powered contract processing is a good example of an application where the algorithm’s performance plateaus quickly. There are thousands of contract types, but most of them share key fields: the parties involved, the items of value being exchanged, the time frame, etc. Specific document types like mortgage applications or rental agreements are highly standardized in order to comply with regulation. Across multiple startups, we have seen algorithms that automatically process these documents reach an acceptable degree of accuracy with only a few hundred training examples, after which additional examples do little to improve the algorithm. That makes it easy for new entrants to match incumbents and earlier entrants in performance.

AIs built for applications where human labor is inexpensive and able to easily achieve high accuracy may need to reach a higher MAP before they can find an early adopter. Tasks requiring fine motor skills, for example, have yet to be taken over by robots because human performance sets a very high MAP to overcome. When picking up an object, the AIs powering the robotic hand must gauge an object’s stiffness and weight with a high degree of accuracy, otherwise the hand will damage the object being handled. Humans can very accurately gauge these dimensions with almost no training. Startups attacking high MAP problems must invest more time and capital into acquiring enough data to reach MAP and launch. 

Threading the needle

Narrow AI can demonstrate impressive gains in a wide range of applications — in the research lab. Building a business around a narrow AI application, on the other hand, requires a new playbook. This process is heavily dependent on the specific use case on all dimensions, and the performance of the algorithm is merely one starting point. There’s no one-size-fits-all approach to moving an algorithm from the research lab to the market, but we hope these ideas will provide a useful blueprint for you to begin.

Source: TechCrunch

Revcontent is trying to get rid of misinformation with help from the Poynter Institute

CEO John Lemp recently said that thanks to a new policy, publishers in Revcontent‘s content recommendation network “won’t ever make a cent” on false and misleading stories — at least, not from the network.

To achieve this, the company is relying on fact-checking provided by the Poynter Institute’s International Fact Checking Network. If any two independent fact checkers from the International Fact Checking Network flag a story from the Revcontent network as false, the company’s widget will be removed, and Revcontent will not pay out any money on that story (not even revenue earned before the story was flagged).
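The demonetization rule as described is a simple threshold on independent flags. A minimal sketch; the organization names are illustrative placeholders, not a statement of which IFCN members Revcontent actually uses:

```python
def should_demonetize(flaggers: set) -> bool:
    """Per the stated policy: once two independent fact-checking
    organizations flag a story as false, the widget is pulled and no
    revenue is paid out on that story."""
    return len(flaggers) >= 2

# One flag is not enough; two independent organizations trigger the cutoff.
assert not should_demonetize({"CheckerA"})
assert should_demonetize({"CheckerA", "CheckerB"})
```

Using a set keeps the check honest: repeated flags from the same organization count once, which is what "independent" requires.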

In some ways, Revcontent’s approach to fighting fake news and misinformation sounds similar to the big social media companies — Lemp, like Twitter, has said his company cannot be the “arbiter of truth,” and like Facebook, he’s emphasizing the need to remove the financial incentives for posting sensationalistic-but-misleading stories.

However, Lemp (who’s spoken in the past about using content recommendations to reduce publishers’ reliance on individual platforms) criticized the big internet companies for “arbitrarily” taking down content in response to “bad PR.” In contrast, he said Revcontent will have a fully transparent approach, one that removes the financial rewards for fake news without silencing anyone.

Lemp didn’t mention any specific takedowns, but the big story these days is Infowars. It seems like nearly everyone has been cracking down on Alex Jones’ far-right, conspiracy-mongering site, removing at least some Infowars-related accounts and content in the past couple of weeks.

The Infowars story also raises the question of whether you can effectively fight fake news on a story-by-story basis, rather than completely cutting off publishers when they’ve shown themselves to consistently post misleading or falsified stories.

When asked about this, Lemp said Revcontent also has the option of completely removing publishers from the network, but he said he views that as a “last resort.”

Source: TechCrunch

‘Unhackable’ BitFi crypto wallet has been hacked

The BitFi crypto wallet was supposed to be unhackable and none other than famous weirdo John McAfee claimed that the device – essentially an Android-based mini tablet – would withstand any attack. Spoiler alert: it couldn’t.

First, a bit of background. The $120 device launched at the beginning of this month to much fanfare. McAfee claimed it contained no software or storage and was instead a standalone wallet similar to the Trezor. The website featured a bold claim by McAfee himself, one that would give a normal security researcher pause.

Further, the company offered a bug bounty that seems to be slowly being eroded by outside forces. They asked hackers to pull coins off of a specially prepared $10 wallet, a move that is uncommon in the world of bug bounties. They wrote:

We deposit coins into a Bitfi wallet
If you wish to participate in the bounty program, you will purchase a Bitfi wallet that is preloaded with coins for just an additional $10 (the reason for the charge is because we need to ensure serious inquiries only)
If you successfully extract the coins and empty the wallet, this would be considered a successful hack
You can then keep the coins and Bitfi will make a payment to you of $250,000
Please note that we grant anyone who participates in this bounty permission to use all possible attack vectors, including our servers, nodes, and our infrastructure

Hackers began attacking the device immediately, eventually hacking it to find the passphrase used to move crypto in and out of the wallet. In a detailed set of tweets, security researchers Andrew Tierney and Alan Woodward began finding holes by attacking the operating system itself. However, BitFi claimed this did not match the bounty to the letter, even though the company did not actually ship any bounty-ready devices.

Then, to add insult to injury, the company earned a Pwnie Award, given for worst vendor response, at the Defcon security conference. As hackers began dismantling the device, BitFi went on the defensive, consistently claiming that the device was secure. And the hackers had a field day. One hacker, 15-year-old Saleem Rashid, was able to play Doom on the device.

The hacks kept coming. McAfee, for his part, kept refusing to accept the hacks as genuine.

Unfortunately for BitFi, the latest hack may have just fulfilled all of its requirements. Rashid and Tierney have been able to pull cash out of the wallet by hacking the passphrase, a primary requirement for the bounty. “We have sent the seed and phrase from the device to another server, it just gets sent using netcat, nothing fancy,” Tierney said. “We believe all conditions have been met.”

The end state of this crypto mess? BitFi did what most hacked crypto companies do: double down on the threats. In a since-deleted tweet, the company made it clear that it was not to be messed with.

The researchers, however, may still have the last laugh.

Source: TechCrunch

Bird and Lime are protesting Santa Monica’s electric scooter recommendations

Lime and Bird are protesting recommendations in Santa Monica, Calif. that would prevent the electric scooter companies from operating in the Southern California city. We first saw the news over on Curbed LA, which reported both Lime and Bird are temporarily halting their services in Santa Monica.

Last week, Santa Monica’s shared mobility device selection committee recommended the city move forward with Lyft and Uber-owned Jump as the two exclusive scooter operators in the city during the upcoming 16-month pilot program. The committee ranked Lyft and Jump highest due to their experience in the transportation space, staffing strategy, commitments to diversity and equity, fleet maintenance strategies and other elements. Similarly, the committee recommended both Lyft and Jump as bike-share providers in the city.

Now, both Bird and Lime are asking their respective riders to speak out against the recommendations. Bird, which first launched in Santa Monica, has also emailed riders, asking them to tell the city council that they want Bird to stay.

“In a closed-door meeting, a small city-appointed selection committee decided to recommend banning Bird from your city beginning in September,” Bird wrote in an email. “This group inexplicably scored companies with no experience ever operating shared e-scooters higher than Bird who invented this model right here in Santa Monica.”

Bird goes on to throw shade at Uber and Lyft — neither of which has operated an electric scooter service before. That shade is entirely fair, but one could argue that Uber and Lyft’s extensive experience operating transportation services within cities would make them better equipped to run an electric scooter service than a newer company.

In addition to asking people to contact their city officials, Bird is hosting a rally later today at Santa Monica City Hall. But given that most of these electric scooters are manufactured by the same provider and that the services are essentially the same, I’d be surprised if there’s much brand loyalty. Over in San Francisco, I personally miss having electric scooters, but I really don’t give a rat’s pajamas about which services receive permits. That’s just to say, we’ll see if these efforts are effective.

I’ve reached out to both Lime and Bird and will update this story if I hear back.

Source: TechCrunch

The browser-based Monero miner Coinhive generates around $250,000 each month

Despite a fall in cryptocurrency mining, the Coinhive Monero miner is still highly active, generating around $250,000 each month. Coinhive also contributes 1.18 percent of the total mining power behind the Monero blockchain.


Source: Digital trends