Pages

Wednesday 9 August 2017

The 5 Technologies We Need to Change the World


I just finished reading an interesting hard science fiction book called The Punch Escrow, by Tal M. Klein (a movie is in the works).
What makes the difference between hard and soft science fiction is that hard science fiction is based on science, while soft is, let's just say, far more imaginative. To be honest, I enjoy both types, and the soft stuff is a ton easier to write. Those pesky physical rules don't get in the way, and you don't have to do research.
The story takes place several decades in the future, and it revolves around the idea of quantum foam and teleportation. It points out why teleportation may never be practical, but it brings up the idea of human 3D printing, which could be used more effectively for space exploration.
However, it also would have a massive number of other uses, both good and bad, which got me thinking about what else could change our future in a massive way. I came up with a list of five potentially world-changing technologies.
I'll close with my product of the week: a book on management that could have a massive effect on your company's success, based on the black boxes used in airplanes. It's called Black Box Thinking.

Technology 1: Organic Printing

We can use 3D printers for plastics, ceramics, metals and some blends, but our efforts even to print food have been more in line with automated icing machines for cakes than printing food.
If we could print food affordably using nonperishable components, it would mean not only that we would be better able to address the massive amount of global hunger that exists, but also that we potentially could cut the cost of food manufacturing and eliminate most food-borne illnesses.
There is an amazing amount of activity in this area, suggesting that by 2030 we actually might have something like the Star Trek replicator in our homes.
Given that this same technology likely could manufacture drugs and better prosthetics, this single step could have a massive impact on how we live -- far beyond the way we eat.

Technology 2: Advanced Bio-engineering

A division of Google is releasing millions of bio-engineered mosquitoes to eliminate those that carry sicknesses. Granted, I do remember that many apocalyptic movies start this way.
The ability to manufacture insects that can address certain problems could have a massive impact, good and bad, on our environment. The bad would come from a mistake, or if someone decided to create militarized mosquitoes.
In the world of The Punch Escrow, there are mosquitoes that have been engineered to eat pollutants in the air and pee H2O -- and characters have to dodge constant pee drenchings from the mosquitoes.
Still, bio-engineered life forms could offset much of the damage we've done to the world -- addressing global warming as well as land, sea and air pollution -- and go places that people currently are unable to go.

Technology 3: AI Salting

Artificial intelligence salting is another concept author Klein introduces as a major plot element in The Punch Escrow. AI salting isn't meal preparation for when we humans eat AIs (boy, talk about a concept that could start a Terminator event); it means a specialized technician teaches an AI to think more like a human.
Basically, it is individual AI deep learning of human behaviors. The underlying concept, making computers think more like humans, is critical to making them more effective at interacting with us.
If we really couldn't tell the difference between an AI and a human, or if an AI handling a human-related task could be made to be empathetic, both the interaction and the effectiveness of the AI would improve vastly.
However, few are focused on the human part, and meeting the challenge of training AIs to be more human-like could change forever the way we interact with and use them. At the very least, it would be a huge step in creating robots indistinguishable from humans and making the Westworld experience real.

Technology 4: Ultracapacitor Batteries

As Elon Musk repeatedly has said, batteries suck. Ultracapacitors can be charged and discharged almost instantly. They don't have the level of temperature problems that batteries currently exhibit. They are much lighter, which increases efficiency in things like cars, and their life cycle is vastly longer than current batteries.
The problem is, they don't do a good job of storing energy for any length of time. Some recent promising news from the scientific community suggests we may be close to sorting this out.
Batteries that could charge instantly and produce far more energy without problems would be a huge step toward making off-grid home power and electric-powered cars far more convenient.
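The tradeoff described above comes down to energy density: a capacitor stores energy as E = 1/2 CV^2, and even a very large ultracapacitor holds far less energy per kilogram than a lithium-ion cell. A rough back-of-the-envelope sketch (the component figures below are ballpark assumptions for illustration, not measurements):

```python
# Rough comparison of stored energy: ultracapacitor vs. lithium-ion cell.
# All component figures below are ballpark assumptions for illustration.

def capacitor_energy_joules(capacitance_farads: float, voltage: float) -> float:
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage ** 2

# A large commercial ultracapacitor: ~3000 F at 2.7 V, ~0.5 kg (assumed).
ultracap_joules = capacitor_energy_joules(3000, 2.7)

# A typical 18650 lithium-ion cell: ~3.6 V * 3.0 Ah, ~0.046 kg (assumed).
li_ion_joules = 3.6 * 3.0 * 3600  # volts * amp-hours * seconds per hour

print(f"Ultracapacitor: {ultracap_joules / 0.5 / 3600:.1f} Wh/kg")
print(f"Li-ion cell:    {li_ion_joules / 0.046 / 3600:.0f} Wh/kg")
```

With these assumed figures the ultracapacitor lands around 6 Wh/kg against roughly 235 Wh/kg for the cell, which is why the charging speed and cycle life of ultracapacitors haven't yet displaced batteries for bulk storage.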

Technology 5: Wireless Power

Ever since Nikola Tesla started talking about being able to broadcast power, it has been a known game-changer. Granted, Tesla may have gotten his ideas from aliens, but if you don't need batteries, then electric cars, planes, trains and personal electronics become smaller and far more reliable.
Qualcomm is working on a technology called "Halo", initially to charge electric cars without having to plug them in. However, its vision includes putting this technology in roads so that you'd never have to charge your car again -- it would charge while you were driving.
Rather than replacing a gas pump with a far slower charging station, you would just get rid of it. While not as good as true broadcast power, technology like this could work in cars, planes and offices, and we would never have to worry about charging our personal stuff or cars ever again.
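Qualcomm hasn't published Halo's exact electrical parameters, but systems like it generally rely on resonant inductive coupling, in which the transmitter and receiver coils are tuned to the same resonant frequency so power transfers efficiently across an air gap. A hypothetical sketch of that tuning calculation (the coil and capacitor values below are invented for illustration):

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical charging-pad coil: 24 uH inductance, 120 nF tuning capacitor.
f = resonant_frequency_hz(24e-6, 120e-9)
print(f"Resonant frequency: {f / 1000:.1f} kHz")
```

With these invented values the tank resonates near 94 kHz, in the same general range as the 85 kHz band used for wireless EV charging; mistuned coils transfer far less power, which is why both sides of the link must agree on the frequency.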
A similar technology from WiTricity is being used to develop wireless charging for all our devices and is currently being built into Dell's laptop charging docks.

Wrapping Up

Put these technologies together, and we'd have our food coming to us anyplace in any form and at any time we wanted. We'd have bugs making the world a better place to live.
AIs would be our friends -- not the problem Elon Musk is envisioning (though I kind of question his idea that government should fix this, given how bad it is at fixing things), or they'd just be much better at "taking care" of us -- but not in a good way.
Finally, if we can get better energy storage and distribution, we end up in a far more reliable and less-polluted world, coming damn close to a future Utopia. Though, as The Punch Escrow points out, if we can't fix ourselves, the result still could be pretty nasty.
Just think of the implications of printing people... As the only sure thing about the future is that it will be very different from the world of today, here's hoping that is a good thing.

Summer is the time I get caught up on my reading, and after reading The Punch Escrow, I moved to another recommended book that is far more practical. Black Box Thinking is based largely around comparing the healthcare industry to the airline industry, pointing out that airlines have become massively safer over the years, largely because they have black boxes, while hospitals, which lack an equivalent, may be the third biggest killer of people.
The reason this hits home for me is that it points to hospitals as places where errors are covered up aggressively to avoid liability. Black boxes, which capture errors but can't be used in litigation, are used to determine fault -- not to assign blame, but to ensure that the mistake never happens again. This one practice has helped transform air travel from one of the least safe ways to travel to one of the safest.
The big takeaway is that if you and your company can focus more on mistakes as learning opportunities and on ensuring that they are one-time events, rather than focusing on shooting the poor sap who made the mistake, which is much more typical, you'll end up not only with a far less hostile working environment, but also a far more successful company.
One of my big personal concerns is that we'll transfer this process of blame and covering up mistakes to our coming wave of ever-more-intelligent machines, which could speed up the related problems to machine speed. I doubt we'd survive that.
So, a book that makes workplace environments better, companies more successful, and humans more likely to survive is worth reading, I think, and it's my product of the week. 

CoreOS, OCI Unveil Controversial Open Container Industry Standard


CoreOS and the Open Container Initiative on Wednesday introduced image and runtime specifications largely based on Docker's image format technology.
However, OCI's decision to model the standard on Docker's de facto platform has raised questions. Some critics have argued for other options.
Version 1.0 provides a stable standard for application containers, according to Brandon Philips, CTO at CoreOS and chair of the OCI Technical Oversight Board.
Having a standard created by industry leaders should spur OCI partners to develop further standards and innovation, he said.
Reaching the 1.0 mark means that the OCI Runtime Spec and the OCI Image Format Spec now are ready for broad use. Further, this achievement will push the OCI community to help stabilize a growing market of interoperable pluggable tools, Philips added.
The industry-supported standards also will provide a sense of confidence that containers are here to stay, he said, and that Kubernetes users can expect future support.
"The outcome is really good. The certification process is under way now," Philips told LinuxInsider.

Collaboration Challenges

Open standards are key to the success of the container ecosystem, said Philips, and the best way to achieve standards is by working closely with the community. However, reaching agreement on version 1.0 was more time consuming than expected.
"Early on, the biggest challenge was coming to terms with the model of how the project releases would work and how to get the project off the ground," Philips recalled. "Everyone underestimated how much time that would take."
Coalition members dealt with mismatched expectations about what they wanted to do, he said, but in the last year or so the group worked through those expectations and more testing came through.

Quest for Standards

CoreOS officials began discussing the idea for an industry-approved open standard for the container image and runtime specifications several years ago. That early quest led to the realization that agreeing on a standard image format was critical, Philips said.
CoreOS and container technology creator Docker announced OCI's formation in June 2015. The coalition started with 21 industry leaders forming the Open Container Project (OCP) as a non-profit organization seeking minimal common standards for software containers for cloud storage.
The coalition includes leaders in the container industry -- among them, Docker, Microsoft, Red Hat, IBM, Google and The Linux Foundation.
OCI's goal is to give application developers high confidence that the software deployed in their containers will continue to work as newer specifications come out and new tools are developed, whether those tools and applications are proprietary or open source. With the specifications in place, products can be designed to work with any container configuration, Philips said.
"You need a conscious effort to create standards outside of people writing code. It is a separate effort," he added.
As part of the coalition, Docker donated its de facto image format standard technology to the OCP.
It included the company's container format, runtime code and specifications. Work on creating an Open Container Initiative Image Specification began last year.
The standards milestone gives container users the capability to develop, package and sign application containers. They also can run the containers in a variety of container engines, noted Philips.

A Choice of One?

The coalition faced two ways to pursue open standards, observed Charles King, principal analyst at Pund-IT. The first option was to gather like-minded people to hash out differences and build standards from scratch.
The coalition members seemed to settle for the second option, which involved adopting a powerful, market-leading platform as an effective standard, he said.
"Docker's contributions to The Linux Foundation put the OCI firmly on the second path -- but those who are concerned about Docker's approach or its market position may feel there are better options," King told LinuxInsider.
In fact, one OCI member -- CoreOS -- leveled some strong criticism of the group's general direction at the beginning of the effort, he said, "so it will be interesting to see how V1.0 does/doesn't address those concerns."

Faster Path

Docker's widely deployed runtime implementation is a suitable foundation for building an open standard. It already was a de facto standard, according to David Linthicum, senior vice president at Cloud Technology Partners.
"It's also important that we get this working for us quickly. The waves of standards meetings, dealing with politics and things such as that, just waste time," he told LinuxInsider.
Right now, though, there are no better options, Linthicum added.
The runtime Docker uses is runC, which is an implementation of the OCI runtime standard, according to Joe Brockmeier, senior evangelist for Linux Containers at Red Hat.
"So, runC is a suitable foundation for a runtime standard, yes. It is broadly accepted and forms the basis for most container implementations today," he told LinuxInsider.
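Concretely, the OCI Runtime Spec that runC implements describes a "bundle": a root filesystem plus a JSON configuration file, config.json, that tells the runtime what process to start and how. A minimal sketch of the kind of fields the spec defines (the values here are illustrative, and this is a small subset of the full schema):

```python
import json

# Minimal illustration of an OCI runtime bundle config ("config.json").
# This is a simplified subset of the spec's schema; real bundles also
# define namespaces, mounts, capabilities, resource limits and more.
oci_config = {
    "ociVersion": "1.0.0",
    "process": {
        "terminal": False,
        "cwd": "/",
        "args": ["sh"],                       # command run inside the container
        "env": ["PATH=/usr/sbin:/usr/bin:/sbin:/bin"],
    },
    "root": {
        "path": "rootfs",                     # relative to the bundle directory
        "readonly": True,
    },
    "hostname": "demo",
}

print(json.dumps(oci_config, indent=2))
```

Any OCI-compliant runtime, runC included, is expected to accept a bundle shaped like this, which is what makes the tools "pluggable" in the way Philips describes.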
OCI is far more than Docker. While Docker did commit the underlying code from which the OCI specification is derived, the lineage stops there, said Brockmeier, and no truly viable alternatives exist.

Docking the Question

Adopting an industry-wide standard likely will simplify and speed container adoption and management for many companies, suggested Pund-IT's King. It also is likely that some key vendors will continue to focus on their own proprietary container technologies.
"They'll argue that theirs is a superior path -- but that will effectively prevent the OCI from achieving market-wide leadership," he said. "Starting out with a standard that's more or less complete, as OCI has, may not perfectly please everyone, but it's likely to move forward to completion more quickly and effectively than other options."
Containers have standardized deployment to cloud, with Docker clearly leading the way, said Marko Anastasov, cofounder of Semaphore.
Docker's de facto standard container does represent the best foundation for developing an open standard, he said.
"How Docker's commercial interests will influence the scale of its involvement in OCI remains to be seen," he told LinuxInsider.

Opposing Viewpoint

An open standard is not the end-all for adopting more containers in cloud deployment, contended Nic Cheneweth, principal consultant with ThoughtWorks. A better approach is to look at the impact of the server virtualization segment of the IT industry.
"The principal driver for continued growth and widespread adoption was not in the statement of an industry standard but in the potential and realized efficiencies obtained by use of any of the competing technologies, such as VMware, Xen, etc.," Cheneweth told LinuxInsider.
Aspects of container technology, such as the container itself, lend themselves to the definition of a standard. Until then, healthy competition guided by deep open source software involvement will contribute to a better standard, he said.
A standard around the orchestration of containers is not particularly important to the continued growth of the space, according to Cheneweth.
However, if the industry insists on locking into a de facto container standard, the model OCI chose is a good starting point, he said. "I don't know that better options are available, but certainly worse ones exist."

Microsoft Rolls Out Linux Support in SQL Server 2017 Release Candidate


Microsoft on Monday announced the availability of its first public release candidate for SQL Server 2017, which includes full support for Windows, Linux and Docker containers.
SQL Server on Linux improves on earlier previews with several key enhancements, including active directory authentication; transport layer security to encrypt data; and SQL Server Integration Services that add support for Unicode ODBC drivers.
SQL Server 2017 has demonstrated faster performance than competitive databases or older SQL Server versions with new benchmarks, Microsoft said, including the world record TPC-H 1-TB non-clustered data warehousing benchmark achieved this spring using SQL Server 2017 on Red Hat Enterprise Linux and HPE DL380 Gen 9 hardware.
Among early adopters is Dv01, a financial technology startup that began using an open source database on a rival's cloud but got 15 times faster performance on SQL Server 2017, according to Microsoft.
Another customer, Convergent Computing, has moved some Tier 2 applications to inexpensive, white box servers using SQL Server 2017 on Linux.
Convergent started fiddling with SQL on Linux last year, said Rand Morimoto, president of the company.
After evaluating the initial performance of the new platform, Convergent added applications as Microsoft added functionality, he told LinuxInsider.
Convergent now has half a dozen business applications -- including financial systems data analysis and client communications management systems -- running successfully on Linux, Morimoto said.
The company can get the same performance while running these SQL/Linux instances on lower power, he said, allowing it to recapture resources allocated to other applications.
"We're anticipating over the next 24 months that will translate to a decrease in costs of at least 34 percent," Morimoto said. "As we scale our SQL instances using SQL/Linux, we feel we can effectively increase capacity by 40 to 50 percent without having to allocate more resources."

Advantage Azure?

The move is the latest indication of Microsoft's growing warmth toward open source. CEO Satya Nadella has shifted the company's philosophy away from competition and toward cooperation.
"It recognizes that forcing customers onto Windows servers isn't a good long-term play for a couple of reasons," noted Rebecca Wettemann, vice president of research at Nucleus Research.
For starters, Microsoft is laser-focused on bringing customers to Azure, making the desire to fight the Windows server battle less important, she told LinuxInsider.
"Windows is still a dominant player in the enterprise server space," Wettemann acknowledged, but "with the growth of people writing for Linux, and the growth of Linux servers in the enterprise, Microsoft recognizes that customers want to run their database of choice on Linux -- so opening SQL to Linux puts Microsoft into the hands of more potential users."
The release seemed timed to coincide with last week's announcement of Microsoft's Azure Stack, which likely will have a huge impact on the private cloud market, noted Paul Teich, principal analyst at Tirias Research.
The availability of SQL Server on Linux will ensure that Microsoft customers that don't want a full off-premises commitment to the Azure cloud have a familiar -- and licensed -- database available for on-premises deployment of Azure Stack, he told LinuxInsider.
"It's a half step for customers who want to modernize their code a little bit but have concerns that keep them out of a multitenant cloud," Teich said. "They want the flexibility to take advantage of a hybrid cloud if needed."

Customer Demand

There is demand for stable, scalable databases, said Ron Pacheco, director of product management for Red Hat's platform business unit -- though he could not speak to any specific level of demand that might be driving Microsoft's strategy.
"Customers are openly embracing the open source development model, even for their own internally developed applications," he told LinuxInsider, "as it leverages the collective intelligence of a large community of mutually interested parties that yields very fast innovation."
Microsoft first announced plans to offer SQL on the Linux platform early last year, noting that it was important to make the database available across multiple platforms. Enterprise customers have gravitated toward Linux not only for its relatively lower costs, but also for its fewer security risks.
"Linux is the dominant operating system of the cloud and of open source tools and platforms," said Doug Henschen, principal analyst at Constellation Research.
Microsoft needed to make Azure attractive to cloud developers across platforms, he pointed out.
"Microsoft had to fill that gap," Henschen told LinuxInsider, "and indeed, it recently introduced a MySQL cloud service -- the open source database -- that runs on Linux."
There are several takeaways from Microsoft's announcement of SQL Server 2017 RC1, suggested Al Gillen, GVP for software development and open source at IDC.
"Microsoft is truly supporting a mixed open source/Windows support model," he told LinuxInsider.
Further, "SQL Server on Linux is a competitive alternative to Oracle on Linux, giving customers that want an enterprise-quality database that is commercially supported on Linux two choices," Gillen pointed out.
Clearly, he said, "there is room to grow the SQL Server business into the Linux space." 

iPhone 8 leaked in live images, tipped to launch in September alongside iPhone 7s & 7s Plus


Apple's tenth anniversary edition phone, aka the iPhone 8, is already in mass production, a recent report reveals. Going by recent reports, Apple seems to have already started preparing for the iPhone 8 launch, which is likely to happen sometime next month, i.e., September. However, the company has yet to officially confirm the launch date. Ahead of the launch, iPhone 8 leaks and rumours are surfacing almost every passing day. A leak just a day ago, supposedly showing the final design of the iPhone 8, suggests that the handset will come with an edge-to-edge screen and no fingerprint sensor at all. That leaked image, however, wasn't clear enough to make out every detail of the handset. Now, for the first time, a crystal-clear live image of the iPhone 8 has been leaked online.
The new live image shows both front and rear panel details of the iPhone 8. The first thing anyone would notice in the image is the bezel-less design. So, the iPhone 8 is going to be a bezel-less phone, which is quite possible, and some rumours have suggested the same.
Going by the leaked live images, the iPhone 8 will have no fingerprint sensor. Something similar was suggested by Evan Blass, a popular tipster who leaked an iPhone 8 image he called the final design of the phone, noting that the fingerprint sensor is missing. It is nowhere to be seen, neither at the back nor on the front, and the new live images of the iPhone 8 show the same. However, another recent leak showing the casings of the iPhone 7s, iPhone 7s Plus and iPhone 8 includes a cutout at the back for a fingerprint sensor.

This new leak comes from Techtastic.com. The images show the iPhone 8 in a glossy black colour, with Apple's logo placed at the centre of the back. The image of the back of the device shows a vertical dual camera setup, which rumours and leaks about the device have suggested many times. So, basically, there is still confusion over whether the upcoming iPhone will come with a fingerprint sensor or not. Things should become clearer with time.
Further, KGI Securities' Ming-Chi Kuo, who earlier predicted that the iPhone 8 would be announced later than the iPhone 7s and 7s Plus, now says that all three models will be announced together, sometime around September. The report comes from Apple Insider. He further reveals that the handset will undergo verification testing in August and that mass production will kick off by mid-September. Reports also say the iPhone 8 initially will be available in limited quantities, with some claiming the supply chain will produce only 2 million to 4 million units of the phone this quarter. Another leaked report shows the iPhone 8 in three colours: Black, Silver and Gold.

New AI Assistant Digs Up Specialized Info for Makers


Avnet last week unveiled a beta version of Ask Avnet, an automated virtual assistant that combines artificial intelligence with on-demand access to industry experts.
Ask Avnet targets "engineers, designers, hobbyists, makers and purchasing specialists across the electronics supply chain -- which includes the product manufacturing chain," said Kevin Yapp, senior vice president for digital transformation at Avnet.

Ask Avnet gathers information from the company's Web-based ecosystem -- including Avnet.com, element14.com and Hackster.io -- and soon will include access to Avnet's MakerSource.io and PremierFarnell.com properties.
Ask Avnet leverages AI to help anticipate a user's next move and provide the best answer, rather than listing all possible answers.

How Ask Avnet Works

Ask Avnet "aims to shorten the amount of time it takes for Avnet customers to access information," Yapp told TechNewsWorld.
Avnet customers who already are familiar with an Avnet property, such as hackster.io or element14, "will discover more parts or components or choices within Avnet ... with a simple click," he pointed out.
Initially, users interact with an automated assistant for fast answers to everyday questions. They are connected to the appropriate Avnet expert when necessary.
Ask Avnet works on both desktops and mobile devices. However, during the beta phase, the focus is on desktops.
"Full support for mobile devices will be available when we make the tool open to all visitors across our Web properties later this fall," he said.

Going With the Flow

Avnet "isn't the first to do this," remarked Jim McGregor, principal analyst at Tirias Research.
"Microsoft's doing something, and so are other companies, where the digital assistant is really targeted towards the application or the area of expertise that they're looking for," he told TechNewsWorld.
That said, this is "the way to provide better customer service, reduce costs associated with professional labor, and reduce the number of questions people come up with," McGregor pointed out.
The use of AI and machine learning is one way businesses can increase their competitiveness, so its use in the enterprise is likely to increase, ABI Research has predicted.
However, Ask Avnet is "more of a limited response system, and we've had those for some time," noted Rob Enderle, principal analyst at the Enderle Group.
"You could do what they're demonstrating with scripts, for the most part, and an old expert system," he told TechNewsWorld. "This is clearly just an early test of concept, but the true AI doesn't appear to exist yet."

Ask Avnet Benefits

It's likely that product developers who use Ask Avnet eventually will get higher-quality advice more quickly, Enderle said. "Once the AI truly kicks in, the experts become largely virtual, with escalation to people decreasing over time as the AI learns from the interaction."
Combining the ability of a digital assistant with a virtual database to target specific customers in specific areas such as engineering "has so many benefits," McGregor said. The system can "continue to learn and evolve with all the information that's going into it."
Avnet is gunning for B2B companies, Yapp said.

The Specter of Google, Apple and Microsoft

Ask Avnet is more like Google Voice, Microsoft's Cortana or Siri than it is like IBM Watson, Yapp said.
The global enterprise market for voice recognition technologies will grow from US$44 billion in 2016 to $79 billion in 2021, according to BCC Research.
Ask Avnet does not have voice capability, which raises the question of whether enterprises might turn to Google, Apple or Microsoft, whose assistants combine AI with machine learning and voice.
"You do want to add voice," Tirias' McGregor said, "but, a lot of times when dealing with engineers, you may need to bring up visual information as well."

Tuesday 8 August 2017

Facebook Adds Hardware, Software Vetting and 4K to 360 Live


Facebook on Tuesday announced several updates to its live-streaming platform, including a new hardware and software vetting program used to create 360-degree video.
Through its new Live 360 Ready Program, Facebook will review hardware and software and approve products that work well with its Live 360 offering. Products deemed "ready" for Live 360 will be allowed to display a Facebook Live logo on their packaging or website.
"Each camera's app or Web experience will enable you to interact with your friends and followers through direct access to Facebook's native reactions and comments," noted Facebook Product Manager Chetan Gupta and Product Marketing Manager Caitlin Ramrakha in an online post.
Facebook has approved 11 cameras and seven software suites so far.
Live 360 Ready cameras included Giroptic iO, Insta360 Nano, Insta360 Air, Insta360 Pro, ION360 U, Nokia Ozo, Z CAM S1, 360Fly HD, 360Fly 4K and 360Fly 4K Pro.
Live 360 Ready software packages included Assimilate SCRATCH VR, Groovy Gecko, LiveScale, Teradek, Voysys, Wowza and Z CAM WonderLive.
"The way we communicate is getting more and more visual, and live 360 video is the richest medium of all," said JK Liu, CEO of Insta360, maker of a Live 360 Ready camera.
"We're excited to bring Facebook users a way to go live in 360 that fits in seamlessly with the way they already use their phones," Liu added.

4K Added

Facebook also announced that Live 360 streams will support 4K resolution. What's more, it will be available in virtual reality.
"Live 360 broadcasts will be available to watch in VR -- both while they're happening and after they're over -- in our free Facebook 360 app for Gear VR, available on the Oculus Store," Gupta and Ramrakha wrote.
Resolution has been frustrating for some 360 video content providers on Facebook, said Chris Michaels, streaming industry evangelist at Wowza Media Systems, a Live 360 Ready software maker.
"One of the biggest challenges for content creators has been delivering in a high enough resolution to provide breathtaking 360 degree experiences," he told TechNewsWorld. "With 4K, we don't have to worry about rendering down high-quality video and can deliver it at its optimal design rate."
Facebook also will be adding donate buttons and scheduling to Live 360.
Donate buttons allow nonprofits to raise funds when they stream a Live 360 broadcast -- either their own or someone else's.
Scheduling allows Live 360 broadcasters to alert their friends and followers of an upcoming broadcast. The alert is posted to their news feeds, where they can choose to receive a reminder alert when the broadcast is about to start.

Post-Production Tools

Facebook announced a number of new post-production tools for Live 360 as well.
If it detects shakiness in a video, Facebook will use its stabilization tool to steady it.
With the guide tool, a video author can identify points of interest in a video and direct viewers to them.
If you're wondering what parts of your video most engage your audience, there's a heatmap tool that shows you that.
Finally, there's a crossport tool for broadening the distribution of your video.

Content Play

The latest updates to Live 360 are a content play, said Ross Rubin, principal analyst at Reticle Research.
"It's about encouraging content and ensuring a level of compatibility and quality control over that content," he told TechNewsWorld.
The updates also are a way to help Facebook compete with YouTube.
"Being a video platform rival to YouTube has been a longstanding goal of Facebook," Rubin said.
The upgrades are aimed more at professional video and advanced content creators than mainstream users, noted Jack Kent, a senior analyst with IHS Markit.
However, they "should increase the amount of Live 360 content for Facebook users," he told TechNewsWorld.
Facebook has been expanding its 360 video and live video strategies rapidly in recent months, Kent pointed out.
"It rolled out Live 360 to all pages earlier this year," he said, "and integrated with a range of leading camera makers and added new audio tools. The new Live 360 Ready Program aims to extend support for a wider range of third-party software and devices."

SparkyLinux 5: Great All-Purpose Distro for Confident Linux Users

When I first reviewed the Game Over edition of SparkyLinux several years ago, I called it one of the best full-service Linux distros catering to game players you could find. That assessment extends to last month's release of the non-gaming edition of this distro.
The latest edition of SparkyLinux, version 5.0 "Nibiru," finds its true calling as a Linux distro that falls between those that are beginner-friendly and those that require some amount of Linux knowledge.
You can get a variety of editions with different desktop offerings, such as E19, LXDE and Openbox, from versions 4.5, 4.6 and 5.0. However, the best option now is the collection of upgraded features in the latest release.
SparkyLinux 5.0 is based on the testing branch of Debian. It features customized lightweight desktops that include LXDE, Enlightenment, JWM, KDE, LXQt, Openbox, MATE and Xfce. It comes with multimedia plugins, a select set of apps, and its own custom tools to ease different tasks.
Regardless of which lightweight desktop or window manager option you prefer, SparkyLinux gives you an operating system that is out-of-the-box ready for use. If you are a Linux purist with very particular distro demands, you can opt for one of the SparkyLinux minimalist releases.
For example, Sparky MinimalGUI gives you Openbox or JWM desktops under the hood and MinimalCLI provides a text-based interface. Both alternatives let you use the Sparky Advanced Installer to load the base system with a minimal set of applications.

Sparks Fly

SparkyLinux is not exactly a beginner-friendly distro, but Linux newcomers will find a much lower learning curve than many other Linux distros require.
The most satisfied user will have some working knowledge of how to set up a Linux system and use its core apps.
At first blush, you might get the idea that SparkyLinux is a Puppy Linux wanna-be distro. You can run it from a thumb drive. You also can supercharge its performance by loading it into your computer's RAM. The lightweight desktop offerings are intended to be fully hard-drive based, though. SparkyLinux does not use a frugal installation and special antics to provide persistent memory.
SparkyLinux is most attractive to two targeted user groups. One group wants an all-purpose home edition with all the tools, codecs, plugins and drivers preinstalled. The other group wants a distro either with everything ready and working from the first run with a minimal set of tools, or a base system waiting for them to set up their way.
Either way, SparkyLinux is a very functional Linux OS that can put a spark in your daily computing experience.

First Impressions

SparkyLinux uses Synaptic as the distro package manager. It gets its distro-specific packages from the Debian testing repository.
This release runs Linux kernel 4.11.6 as default. If you are more adventurous, you can opt to install Linux kernel 4.12.x from the Sparky Unstable repository.
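As a rough sketch of what that opt-in looks like on a Debian-based system -- note that the repository URL, component name and kernel package name below are assumptions for illustration, not taken from the Sparky documentation -- the steps would resemble:

```shell
# Add the Sparky unstable repository (URL and component are illustrative;
# check the SparkyLinux wiki for the real values before using them).
echo "deb https://repo.sparkylinux.org/ core unstable" | \
    sudo tee /etc/apt/sources.list.d/sparky-unstable.list

# Refresh the package lists and install the newer kernel image.
sudo apt-get update
sudo apt-get install linux-image-sparky-unstable  # hypothetical package name

# Reboot into the new kernel, then verify the running version.
sudo reboot
uname -r
```

As with any testing-branch kernel, keep the stock 4.11.6 kernel installed as a fallback boot entry in case the newer one misbehaves.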
SparkyLinux 5 has a clean and attractive main menu design. I selected the MATE desktop option because I have the least experience with it compared to the other desktop and window manager options.
MATE is a neat-looking desktop environment that belies the lightweight desktop label. Other than missing some of the glitz and glitter of the special graphics effects you get with the KDE or Cinnamon desktops, there is not much you cannot do with MATE to get SparkyLinux to behave your way.
Workspace Switcher on the panel bar is preconfigured to show two virtual desktop spaces. Changing that setting is as simple as right-clicking on the switcher and clicking the plus or minus symbol in the pop-up workspaces menu. Follow a similar method for updating the date and time already displayed in the notifications area.
This convenient level of setup is largely a function of the desktop interface. Typically, you have to hunt down the settings in the system tools on the main menu. SparkyLinux has a very obvious focus on user interface convenience.

Installing It

SparkyLinux 5.0 has three installation options. The live session ISO has a direct link to the distro's online installation guide. This is very useful.
The Calamares installer is for general and experienced Linux users. The Sparky Installer provides an easy-to-fathom explanation of setting up partitioning choices based on the target computer's RAM size. This is the preferred installation option for newcomers.
The SparkyLinux Advanced Installer, forked from the Remastersys Installer several years ago, is a lightweight option that works well on older hardware that balks at running the main installers. The Advanced Installer has two modes: Yad-based graphical mode and text mode via a command line.
Be careful if you choose this installer option. The Advanced Installer requires greater attention than the other more automated installer options. This choice is less ideal for inexperienced Linux users.

Bottom Line

Other factors make using SparkyLinux 5 a smart decision. One is its use of a rolling release schedule that pushes the latest packages and edition upgrades as they are ready, without requiring a complete reinstallation.
Starting out, I referred to SparkyLinux as one of the best full-service Linux distros available. Of course, that is a subjective evaluation, but having installed and tested the latest editions of countless Linux distros on a weekly basis for years, I've developed a sixth sense for what makes a great choice and what does not.
SparkyLinux 5 is one of those great choices. Check it out.
Want to Suggest a Review?
Is there a Linux software application or distro you'd like to suggest for review? Something you love or would like to get to know?
Please email your ideas to me, and I'll consider them for a future Linux Picks and Pans column.
And use the Reader Comments feature below to provide your input!

Amazon's Secret 1492 Health Team Sets Sail



A secret Amazon team, dubbed "1492," has been working on a skunkworks project devoted entirely to healthcare, CNBC reported Thursday. The unit has been developing hardware devices and software applications related to electronic medical records, telemedicine and other health-related issues.
The "1492" moniker refers to the year that Christopher Columbus made his voyage to the Americas, but perhaps the Amazon team missed the irony that Columbus actually did not realize he had "discovered" a new continent and thought he was somewhere else.
Nonetheless, it's clear that Amazon's aim is to cover the bases in the healthcare arena, likely a bid to cash in on the sector's massive profit potential.
The greater U.S. healthcare market experienced double-digit growth from 2000 to 2011, with an increase in U.S. revenue from US$1.2 trillion to $2.3 trillion, according to the Centers for Disease Control. That figure likely will grow at an increasing rate as healthcare costs in America continue to skyrocket.

Full Coverage

One of the goals of Amazon's 1492 team appears to be ensuring that Amazon develops a foothold in multiple segments of the lucrative healthcare industry. The latest news builds on an earlier announcement that Amazon has been exploring the possibility of selling pharmaceuticals.
The 1492 team reportedly has been working on ways to streamline medical records management, so as to make the information available to consumers and doctors more readily. In addition, it reportedly has been considering a plan that could improve U.S. healthcare for those with limited access to a doctor. It could include the development of a new telemedicine platform that would allow patients to have virtual consultations with doctors.
Amazon is not entirely new to the medical world, as it already has developed health applications. The next step could be greater connectivity options between its medical devices and other proprietary products, such as its artificial intelligence assistant, Alexa.
"Healthcare is the biggest sector in the economy and ripe for innovation," said Roger Entner, principal analyst at Recon Analytics.
"Nobody spends more on healthcare than the U.S., while many countries have significantly better outcomes for their citizens than the U.S.," he told TechNewsWorld.

Healthy Market

Amazon is not the only company that has been exploring opportunities in the world of healthcare. Apple, Google and Microsoft each have launched their own initiatives.
"It makes sense for all these companies to be investing in AI for healthcare, because along with AI in transportation, AI in healthcare will change society," said Jim McGregor, principal analyst at Tirias Research.
With access to all the medical scans, diagnoses and data feeds available from the major healthcare providers, artificial intelligence would do a better job in some respects than a human, he told TechNewsWorld.
"With its massive data centers and AI capabilities, Amazon is well positioned to be a leader in this area, but it needs to get access to the data, which has been the biggest challenge," added McGregor. "Note that it's only been within the last decade that the majority of medical information has transitioned to electronic form, so it would have been almost impossible to do before."

Cloud Computing and Healthcare

With advances in the archiving of digital information and deep learning, the time could be right to leverage AI for healthcare. However, regulations and privacy concerns could be major challenges, at least in the short term.
"Unfortunately, many healthcare providers are trying to maintain control of all this data," said McGregor.
"In the U.S., in particular, healthcare providers hide behind HIPAA regulations, which state that you need to keep the patient's personal information private, not that you can't share the anonymized information," he added.
Healthcare organizations would have to be persuaded to share their data, even if doing so means relying on a third-party service provider like Amazon. Would the healthcare industry even consider such cooperation?
"Up to now, the answer has been no -- but it could significantly lower their costs and improve the quality of services provided," added McGregor.
In the long term, "it will take an independent third party like Amazon to maximize the benefits of AI in healthcare," he suggested.
That is why the various players are entering this very controlled market -- one that has both potential and hurdles -- so cautiously.
"We are so early in the digitization of healthcare that nobody is really leading," said Recon Analytics' Entner.
"There is definitely demand, but everyone needs to buy in for it to work for everyone," he said. "The reason why everyone is flocking to it is market size, but the obvious fact is that it can be done better, and nobody is doing it remotely right."

Why Facebook's Willow Beats Apple's Saucer


Facebook knocked it out of the park with its financials last week, and a lot of its success comes from Zuckerberg's unique focus. Unlike other firms that jump from project to project, often far afield from what makes them money -- like Google -- Facebook stays close to what made it successful. There is no stronger evidence than a comparison of the office projects from Apple and Facebook.
The huge Apple Flying Saucer (sadly, it doesn't fly) is nearing completion. Facebook recently announced that it too was building a new showcase site, called "Willow" -- but Facebook is building the first arcology at scale.
This will give Facebook some bragging rights. While its new campus might not be as advanced-looking as Apple's, it will be more socially, environmentally and organizationally attuned. Millennials care a great deal about two of those three concepts, suggesting Facebook will be more attractive to the best and brightest, and that its site will be more advanced where it counts.
I'll explain why Facebook will soon set the bar when it comes to forward-looking office design at scale, and why its new facility may represent the future of office design.
I'll close with my product of the week: the Sleeptracker, an interesting sleep aid from Beautyrest.

HomePod Devs Stumble Upon Next iPhone Design Clues

Developers combing through the code for the Apple HomePod have found clues to what appear to be features in the next generation of iPhones, and they tweeted their discoveries on Sunday.
The firmware for HomePod, Apple's US$349 smart speaker expected in December, apparently contains much of the codebase for future iPhones. One of the goodies in the HomePod's code is a new biometric method for unlocking an iPhone.

The use of facial recognition to unlock a phone has been around in the Android world for more than a year, and reactions have been mixed.
"Recent reviews of the face lock feature on Samsung's Galaxy S8 remarked on the slowness of the process, its ineffectiveness in full daylight, and that early iterations were easily fooled with simple photographs," noted Charles King, principal analyst at Pund-IT.
"Interestingly, face lock can't be used to authenticate Samsung Pay purchases," he told TechNewsWorld.

Tough to Fool Camera

"Facial recognition really hasn't taken off with Android," said Ross Rubin, principal analyst at Reticle Research.
Users appear content to use the fingerprint sensors found in most phones, but that may change, he explained.
"With display aspect ratios changing, and room disappearing for front-mounted fingerprint sensors, some sensors have been moved to the back of the phone, which can make things trickier for users," Rubin told TechNewsWorld.
Because an infrared camera creates a 3D image of a face, it has advantages over face recognition performed with light-dependent cameras. For example, recognition can be achieved regardless of lighting conditions.
What's more, the technology is more difficult to game than a conventional camera.
"Things like a picture or mask of a face won't unlock a phone if the infrared is done right," observed Kevin Krewell, principal analyst at Tirias Research.
Facial recognition also can be faster than other unlock methods, he told TechNewsWorld.
"If face unlock works, you don't have to use a PIN or thumb reader to gain access," said Tim Bajarin, president of Creative Strategies.
"In theory, it is even more secure," he told TechNewsWorld, "since it scans more data points from a face to make sure it is you that is accessing the iPhone."

Edge-to-Edge Evidence

The next iPhone may feature facial recognition for technical reasons, suggested Krewell.
"Apple wanted to put the fingerprint sensor under the display, but they haven't been able to get it to work yet," he pointed out.
Developers scrutinizing HomePod's code found information about that display. An image in the firmware of the front of the phone appears to show an almost edge-to-edge display that extends around the speakers and sensors at the top of the device.
Rumors of an edge-to-edge OLED display in a special anniversary edition iPhone have been circulating for months, and these latest observations by developers are more evidence that those rumors have been on target.
Near edge-to-edge screens have become common fare in the Android market.
"I've used a phone with an edge-to-edge screen, and they're pretty nice," said Bob O'Donnell, chief analyst at Technalysis Research.
"Once you get used to that complete sheet of glass with nothing around it, it's hard to go back to screens with bezels around them," he told TechNewsWorld.

Buzz Reaching Crescendo

The "iPhone 8" -- or whatever it is named -- will be a premium product selling in the $1,000 to $1,400 price range, by all accounts. If that's the case, it would make sense for it to have an edge-to-edge display.
"Edge-to-edge is the path that premium phones are taking," Reticle's Rubin noted. "If you're not taking that approach now, it's perceived that you're not at the leading edge of design, where Apple wants to be."
As the announcement window for the new iPhones shrinks and the rumors about them get stronger, the buzz is getting louder.
"Apple's launch strategy for the iPhone 8 is right on the money," said Andreas Scherer, managing partner with Salto Partners.
"The expectations associated with its release are rising to a crescendo," he noted.
"The buzz around the new iPhone 8 might take away some of the punch from the standard iPhone 7s models," Scherer told TechNewsWorld. "Ultimately, though, as Steve Jobs correctly stated, if you don't cannibalize your own business someone else will."

WSL to Ship With Windows 10 Fall Creators Update


Microsoft has announced that Windows Subsystem for Linux will emerge as a fully supported part of the Windows 10 Fall Creators Update when the operating system ships later this year.
The new status means that early adopters in the Windows Insider program no longer will see the subsystem's status as "beta," beginning with Insider build 16251, Microsoft Senior Program Manager Rich Turner noted in an online post last week.
With the updated status, WSL can be leveraged as a day-to-day developer toolset, he said.
Microsoft will continue to address issues and bugs posted on the WSL issues GitHub repo, respond to questions on Twitter, and contribute to discussions on Stack Overflow, Ask Ubuntu, Reddit and other forums, according to Turner.
The key WSL scenarios include the following:
- run Linux command-line tools for development and basic administration;
- share and access files on the Windows filesystem from within Linux; and
- invoke Windows processes from Linux, and Linux processes from the Windows command line.
Linux distros running atop WSL are for interactive user scenarios, Turner said, but not for running production workloads on servers such as Apache, Nginx, MySQL or MongoDB.
Linux files are not accessible from Windows, but the company is working to improve that, he added. There currently are no plans to support X/GUI apps, desktops or servers.
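The command-line and interop scenarios above work in both directions. A minimal sketch, assuming a distro such as Ubuntu is installed and the `wsl.exe` launcher is available, as in recent Insider builds:

```shell
# From the Windows command prompt: run a Linux tool against the
# current Windows directory.
wsl ls -la

# Pipe output from a Windows command through a Linux utility.
dir /b | wsl grep -i readme

# From inside the Linux shell: launch Windows processes directly.
notepad.exe
cmd.exe /c dir
```

The mixed pipeline in the middle is the notable part: stdout from a Windows process flows straight into a Linux filter, with no virtual machine in between.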

Build to Last

The planned release of WSL comes after more than a year of development, testing, and troubleshooting to address a wide range of issues.
Microsoft announced at last year's Build conference that Ubuntu Linux would run on Windows 10 -- a coup for Canonical. The firm had been working with Microsoft on the project for many months before the formal announcement.
"We are working to make Windows 10 the best development platform to design, develop, test and deploy code for all platforms and devices," Microsoft said in a statement provided to LinuxInsider by company rep Andrew Lowe.
The Windows Subsystem for Linux allows developers to run Linux environments -- including most command line tools, utilities and applications -- directly on the Windows OS, unmodified, without the overhead of a virtual machine, the company said.
The WSL engineering team implemented hundreds of fixes and improvements, most of them catalogued in the WSL release notes, Microsoft noted.

Flexibility for Developers

The WSL allows developers to run Linux command lines under three distributions -- Ubuntu 16.04 LTS, openSUSE Leap 42 and SLES 12 -- from within a Windows 10 instance, said Paul Teich, principal analyst at Tirias Research.

The WSL is not a virtual machine -- it runs within a Windows 10 instance, so developer tools that run in Windows 10 and under Linux can coexist and share the filesystem and other resources at the same time, he told LinuxInsider.
"Microsoft enabled this feature because some popular developer tools are based on libraries that only run under Linux -- they haven't been ported to Windows," Teich said, adding that this situation is unlikely to change, as developers continuously create new Linux-based tools.
"WSL enables developers to use a wider range of their favorite tools under Win 10 and in peaceful coexistence with Win 10, which will increase their productivity," he suggested.
From Microsoft's standpoint, said Teich, the more developers use Windows 10 as their production environment, the more they might consider targeting their apps for Windows 10.


