How open-source software transformed the business world

Eric S. Raymond, one of the founders of the open-source movement, wrote in his seminal work The Cathedral and the Bazaar: “Every good work of [open-source] software starts by scratching a developer’s personal itch.” There’s a lot of truth to that. Vital programs such as the Apache web server, MySQL, and Linux began that way, and numerous smaller programs did too. But it’s not likely many people had a personal itch to create giant vertical programs such as telecommunications’ OpenDaylight and OPNFV or the Automotive Grade Linux (AGL) Unified Code Base. Today, vertical companies focused on narrow interests also embrace open-source methods and software with open arms.


Why? Because open source just works. 

This isn’t just my observation. A recent McKinsey & Company report, How software excellence fuels business performance, found that the “biggest differentiator” for top-quartile companies in an industry vertical was “open-source adoption,” where they shifted from users to contributors. The report’s data shows that open-source adoption has three times the impact on innovation for top-quartile companies than it does for companies in other quartiles. In other words, successful companies don’t just use open-source programs; they actively work on their industry’s open-source projects.

This notion still stumps many business leaders. How can actively contributing to something their rivals can use possibly help them in the market? What they don’t get, even now, is that, as President John F. Kennedy said, “a rising tide lifts all boats.” When we share our resources, our work, and our expertise in open source, everyone benefits. But the companies that make the most of it are the ones that actively participate in open-source projects.

Think that’s nonsense? How many of you are using Unix today instead of its open-source twin, Linux? Look at almost any kind of software and you’ll see open source dominates. Look at the top tech giants: Amazon, Google, IBM, and yes, even Microsoft. With the exception of outlier Apple, all of them are either built on top of open source or use it extensively.

The Linux Foundation, in its latest report, Software-defined vertical industries: Transformation through open source, explained how this has worked. The Foundation found that vertical industries such as automotive, motion pictures, finance, telecommunications, energy, and public health have all switched to open-source approaches.

Indeed, The Linux Foundation itself is an example of how open source can transform an institution. It has expanded from a single project, the Linux kernel, to hundreds of distinct project communities. Its “foundation-as-a-service” model supports communities collaborating on open source across key horizontal technology domains, such as cloud, security, blockchain, and the web.

In vertical industries, some businesses do the same things they’ve always done, over and over. They improve their core competitive advantages, such as speed or cost, but the model remains the same.

Others, the ones that are now succeeding, have taken a different path. In what’s now called “digital transformation,” they take their core business models and processes and transform them into open-source software and services. There are many ways to do this: code, application programming interfaces (APIs), cloud assets, and containers. At the end of the day, though, they are all turning business processes and assets into software-defined services.

Stephen O’Grady, co-founder of RedMonk, the developer-focused analyst firm, saw this coming in his 2013 book, The New Kingmakers: How Developers Conquered the World. Moving to a software-defined model is a radical shift. Open source has made it possible for many companies, since most of them started this transformation with relatively small software development teams.

The Linux Foundation goes into many examples, but I’m going to focus on telecommunications and networking since it’s a field I know well. 

Historically, telecommunications companies’ networks were built on standards-based but proprietary, highly customized black-box hardware. Capital investments to switch from one technology to another, such as the move from 2G to 3G, cost billions of dollars.

These companies are fiercely competitive with each other. I don’t need to tell you that. Any night you watch commercial TV in the US, you’re sure to see ads from AT&T, T-Mobile, Verizon, and their smaller competitors and partners. When it comes to winning customers, these companies are at each other’s throats.

But they’re also trying to solve similar problems. By the 2000s, it was becoming crystal clear that the old client-server models would not be up to the challenge of hundreds of millions of mobile phone users constantly on the move. By 2004, the first work towards what would become Software-Defined Networking (SDN) was underway with the Internet Engineering Task Force (IETF) Request for Comments (RFC) 3746, the Forwarding and Control Element Separation (ForCES) Framework.

In an earlier era, this might have become the basis for a telecommunications standard. In the 2000s, however, these ideas helped create an open-source project, OpenFlow, which defined a standard communications interface between an SDN’s control and forwarding layers. Major technology companies such as Deutsche Telekom, Google, Microsoft, Verizon, and Cisco all adopted it and started using it.

Verizon, in particular, didn’t want to keep pouring billions of dollars into proprietary hardware while at the same time making sure its networks would work with its rivals’. Other companies, such as AT&T, also realized that solving the same network-automation problems on their own was a waste of time and money.

So, in 2013, AT&T spurred the industry into action by releasing its open vision for telecommunications’ future in its Domain 2.0 whitepaper. In it, AT&T proposed to transform its networking businesses from their “current state to a future state where they are provided in a manner very similar to cloud computing services, and to transform our infrastructure from the current state to a future state where common infrastructure is purchased and provisioned.” Today, that vision has largely been realized, not just by AT&T, but by its frenemies as well.

The Linux Foundation helped this happen by providing a neutral arena for the companies to work together. Today, under the umbrella of Linux Foundation Networking (LFN), eight different networking projects and as many related projects have brought together almost all of the world’s major telecommunications companies.

Today, over 70% of the world’s mobile phone users are using services built on LFN’s open-source projects. Altogether, the telecommunications companies’ programmers have contributed 78 million lines of source code to LFN projects over the last six years. Using a Constructive Cost Model (COCOMO) valuation, those contributions would have required a research and development cost of over $7.3 billion to create using conventional proprietary methods.
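The article doesn’t spell out the arithmetic behind that valuation, but a back-of-the-envelope Basic COCOMO calculation shows how 78 million lines of code turns into a multi-billion-dollar figure. This is only a sketch: the coefficients below are the standard Basic COCOMO “organic” values, and the loaded cost per person-month is my own assumption, not the Linux Foundation’s actual methodology.

```python
def cocomo_effort_pm(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO effort in person-months for a project of `kloc`
    thousand lines of code (organic-mode coefficients)."""
    return a * kloc ** b

lines_of_code = 78_000_000            # LOC contributed to LFN projects
kloc = lines_of_code / 1000
effort = cocomo_effort_pm(kloc)       # roughly 330,000 person-months
cost_per_pm = 22_000                  # assumed loaded cost per person-month (USD)
valuation = effort * cost_per_pm

print(f"effort: {effort:,.0f} person-months")
print(f"estimated R&D cost: ${valuation / 1e9:.1f} billion")
```

With these assumptions the estimate lands at roughly $7.2 billion, in the same ballpark as the quoted $7.3 billion; a different cost-per-person-month assumption moves the result up or down proportionally.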

You’ll find similar stories of competitors coming together to save billions of dollars from public health to energy to financial tech. Yes, these vertical industries are all very different and face unique challenges, but they also share a common thread. As The Linux Foundation put it, “All of them realized that open collaboration presents opportunities reducing costs, time to market, increasing quality, and opening new areas of competition. The ability to achieve these results on a collective basis pushes innovation forward across respective industries.”

If you’re not using open source in your business yet, you should be. Your business’s future depends on it. It’s no longer just a good idea; it’s a necessity in today’s ever-changing, ever-faster business economy.

Related Stories:

CISA says a hacker breached a federal agency


A hacker gained access to and exfiltrated data from a federal agency, the Cybersecurity and Infrastructure Security Agency (CISA) said on Thursday.

The name of the hacked federal agency, the date of the intrusion, and any details about the intruder, such as an industry codename or state affiliation, were not disclosed.

CISA officials revealed the hack after publishing an in-depth incident response (IR) report detailing the intruder’s every step.

The report, which ZDNet analyzed today, reveals how the intruder gained access to the federal agency’s internal networks through different channels, such as leveraging compromised credentials for Microsoft Office 365 (O365) accounts, domain administrator accounts, and credentials for the agency’s Pulse Secure VPN server.

CISA said the attacker logged into Office 365 accounts to view and download help desk email attachments with “Intranet access” and “VPN passwords” in the subject line. The attackers searched for these files despite already having privileged access to the agency’s network, most likely in an attempt to find additional parts of the network they could attack.

The attacker also accessed the local Active Directory, where they modified settings and studied the structure of the agency’s internal network.

To have a quick way back into the federal agency’s network, the hackers installed an SSH tunnel and reverse SOCKS proxy, deployed custom malware, and connected a hard drive they controlled to the agency’s network as a locally mounted remote share.

“The mounted file share allowed the actor to freely move during its operations while leaving fewer artifacts for forensic analysis,” CISA analysts said.

Furthermore, the attacker also created their own local account on the network. By analyzing forensic evidence, CISA said the hacker used this account to browse the local network, run PowerShell commands, and gather important files into ZIP archives. CISA said that it couldn’t confirm if the attacker exfiltrated the ZIP archives, but this is what most likely happened in the end.

In addition, CISA said the malware the hackers installed on the federal agency’s network “was able to overcome the agency’s anti-malware protection, and inetinfo.exe [the malware] escaped quarantine.”

Nonetheless, investigators said they detected the intrusion via EINSTEIN, CISA’s intrusion detection system, which monitors federal civilian networks from a vantage point and was able to compensate for the attacker bypassing local anti-malware solutions.

Microsoft removed 18 Azure AD apps used by Chinese state-sponsored hacker group



Microsoft said today that it removed 18 Azure Active Directory applications from its Azure portal that were created and abused by a Chinese state-sponsored hacker group.

The 18 Azure AD apps were taken down from the Azure portal earlier this year in April, the Microsoft threat intelligence team said in a report published today.

The report described the recent tactics used by a Chinese hacker group known as Gadolinium (aka APT40, or Leviathan).

The Azure apps were part of the group’s 2020 attack routine, which Microsoft described as “particularly challenging” to detect due to its multi-stage infection process and the broad use of PowerShell payloads.

These attacks began with spear-phishing emails aimed at the target organizations, carrying malicious documents, usually PowerPoint files with a COVID-19 theme.

Victims who opened one of these documents would be infected with PowerShell-based malware payloads. Here is where the malicious Azure AD apps would also come into play.

On infected computers, Microsoft said the Gadolinium hackers used the PowerShell malware to install one of the 18 Azure AD apps. The role of these apps was to automatically configure the victim’s endpoint “with the permissions needed to exfiltrate data to the attacker’s own Microsoft OneDrive storage.”


By removing the 18 Azure AD apps, Microsoft crippled the Chinese hacker group’s attacks, at least for a short while, but it also forced the hackers to re-think and re-tool their attack infrastructure.

In addition, Microsoft said it also worked to take down a GitHub account that the same Gadolinium group had used as part of its 2018 attacks. This action may not have had an impact on new operations, but it did prevent the hackers from reusing the same account for other attacks in the future.

Microsoft’s actions against this Chinese hacker group aren’t an isolated case. Over the past few years, Microsoft has consistently intervened to take down malware infrastructure, whether it was used by low-level cybercrime operators or by high-end state-sponsored hacker groups.

In previous interventions, Microsoft also targeted the infrastructure used by other nation-state groups, tied to Iranian, North Korean, and Russian cyber-operations.

SharePoint Syntex to automate content categorization and build a foundation for knowledge curation

Microsoft announced the general availability of Microsoft SharePoint Syntex as of Oct. 1, 2020. This is the first packaged product to come out of the code-named Project Cortex initiative first announced in November 2019. Project Cortex reflects Microsoft’s ongoing investment in intelligent content services and graph APIs to proactively explore and categorize digital assets from Microsoft 365 and other connected sources.

SharePoint Syntex will be available to M365 customers with E3 or E5 licenses for a small per-user uplift. As of this writing, we anticipate it to be around a $5 per-user, per-month list price, but this may be subject to change. SharePoint Syntex delivers some of the foundational artificial intelligence and machine-learning (ML) services that will help information managers understand, process, and tag content automatically. The second phase of the Project Cortex launch, tools for knowledge curation and management, is expected later in 2020.

Why Is This Important? 

Too many organizations have ignored the importance of a solid information architecture and metadata strategy, whether they are using SharePoint or not. The enhancements delivered in SharePoint Syntex could help get these strategies back on track. Organizing and tagging documents at large scale is a daunting task that currently requires a great deal of human labor, but it is important work that forms a strong foundation on which to build an enhanced set of knowledge discovery, delivery, and curation capabilities in the near to mid-term. SharePoint Syntex is positioned to automate some of this intensive labor and drive key outcomes.

What Is It And What Does It Do? 

Microsoft SharePoint Syntex will deliver new ways of managing large volumes of documents via a new “content center,” which brings various intelligent content services — AI, ML, optical character recognition, enhanced taxonomy services, etc. — to document libraries. Microsoft is taking some of the most relevant Azure cognitive services and infusing them into M365 via this SharePoint Syntex add-on product. New model-building features will allow subject matter experts and content stewards to define and refine how the intelligent services analyze, tag, and extract data from documents. 

Highlights of SharePoint Syntex available in October include: 

  • Image and forms processing. Images can be automatically tagged by leveraging what Microsoft calls a “new visual dictionary” to apply metadata descriptors when common objects are recognized in an image (including JPGs, PNGs, PDFs, and so on). Another service allows nontechnical users to build an AI model to automatically extract values, such as dates, names, or addresses, from semistructured and repeatable document types such as receipts or invoices. Microsoft claims that these form-processing models can be trained with a small set of sample documents (perhaps fewer than 10) if the right mix of positives and negatives is included.
  • Document understanding. Longer text-heavy documents may have broad or long-term business value and benefit from consistent metadata tagging for better search and discovery. SharePoint Syntex can automate metadata tagging of content-rich documents. Microsoft has built this capability using the Language Understanding Intelligent Services for Documents (LUIS-D) model from its Azure Cognitive Services. These models, also built in the new content center, are trainable by subject matter experts and can be applied to multiple libraries. Formats include Office documents, text formats, PDFs, emails, etc. 
  • Automated compliance labels. This automatically extracted metadata not only aids in better search and retrieval, but it can be used to initiate a workflow process, apply a retention policy via Microsoft’s newish retention label feature, or leverage sensitivity labels to control access and distribution of the document. 

What Can Organizations Do With Microsoft SharePoint Syntex? 

Microsoft customers can work toward automating the organizing and tagging of documents (at scale) by: 

  • Experimenting with Syntex with a subset of your user licenses. This is not a feature that can simply be flipped on and expected to work; it will require an investment of time and internal expertise. Organizations wanting to pilot SharePoint Syntex can start with a small set of add-on licenses. Pick a set of documents or use cases that are causing productivity bottlenecks, are part of integral processes that can be driven by metadata, or that can enhance adoption of related retention or data protection policies with more consistent tagging. The initial release of SharePoint Syntex will support English, with other languages to come in the future.
  • Working with a specially trained partner to hit the ground running. To Microsoft’s credit, it is not positioning these Cortex-inspired products as technology magic bullets. To make SharePoint Syntex (and subsequent product releases) really work, it will require human expertise, knowledge of business processes and information architecture, and skills to get projects up and running. Microsoft has launched a partner program specifically to train and enable select system integrators and independent software vendors, which can then support end-user customers. 
  • Gathering your information and knowledge management gurus into a dream team. Bring your experts to the (virtual) table and understand where to invest next. In conversations with large enterprises over the last 18 to 24 months, it has become clear to me that there is a renewed interest in managing digital knowledge assets better and smarter. The need to support virtual and remote workers has upped the stakes on a solid strategy for information management. Companies that will survive, even thrive, in the tumult that is 2020 understand the value of knowledge to serve customers as well as employees.

This post was written by Principal Analyst Cheryl McKinnon, and it originally appeared here. 


Lanmodo Vast Pro night vision camera: Good vision enhancements in the dark


Pros:

  • Excellent night vision
  • Combined night vision and dash cam
  • Simple controls

Cons:

  • Street light lensing on rainy evenings

I am impressed with the Lanmodo range of night vision cameras. I looked at the original Lanmodo night vision camera in March 2020 and liked the device. Now Lanmodo has introduced a new version of its camera, the Vast Pro.

The Vast Pro uses low-light imaging technology to deliver a clear image of the area up to 984 feet (300 metres) ahead, even in almost total darkness. It has a 45-degree field of view from its 1080p camera. The Vast Pro also acts as a dashcam, recording activity in loops.

Lanmodo offers several options for the camera: the basic model, the basic model with a 128GB TF card, and an optional rear camera.

Inside the box are the camera itself, a base to mount the camera on your dashboard, a cigarette lighter power plug, an OBD (On-Board Diagnostics) adaptor, a suction mount, and a screwdriver.

If you have purchased the rear view camera system, in addition to the camera you will also see a rear view camera connecting cable.

The camera has a Sony CMOS sensor and a 7.84-inch IPS screen, giving a 1920 x 1080 resolution from its 5MP camera. The TF card will record up to 28 hours of 28fps video without the rear camera, or 14 hours if the rear camera is in use.


The rear camera has a 170 degree field of vision and will record objects up to 20 metres away.

To set up the camera, insert the micro SD card, connect the Vast Pro to the power socket in the car and turn it on using a button on the top of the camera.

Settings can be accessed using the menu button and set using the up and down buttons. You can record audio and video, and specify the loop recording time as 1, 3, or 5 minutes. You can set the brightness of the camera and modify the sensitivity of the sensor.

Other settings control the orientation of the menu. If you specify that the camera is installed on the window, then the screen flips upside down. You can also set the system time, set the country, and format the TF card.


The Vast Pro gives an amazing view ahead in dark areas and shows the definition of objects, and even clouds, well. I took these images on a moonless night this week, both when parked and while moving through town.

When the camera is stationary, the image is superb. In dark areas the view ahead is crystal clear, albeit a little distracting. I had the most problems in towns and built-up areas.

Street lamp ‘bubbles’ on the left-hand side of the screen, viewed through a rainy windshield.

Car light flares washed out other parts of the screen, and street lights show as ever-expanding red bubbles which ‘burst’ as you drive past them. They are strangely compelling, and distracting.

Street lamp flare on the camera lens, with no rain on the windshield.

To be fair, it was pouring with rain when I drove through town at night, and this lensing is not so obvious on dry nights.

I found myself focusing far too much on the camera watching the light flares. It was really difficult to capture the effect with my mobile phone, but these photos are not enhanced at all.

I did have the camera brightness on medium when I took the photos. I have since changed it to the low setting so it is less intrusive in the car, and less distracting.

All in all, this is a superb night vision camera if you are not as comfortable as you used to be when driving at night.

Try not to look at it too much at night in the rain as you might be a little dazzled by the light.

But if you drive in super dark areas, then the sub-$200 Lanmodo Vast Pro will significantly help your night-time vision when driving.

Amazon’s Alexa gets a new brain on Echo, becomes smarter via AI and aims for ambience


Meet Alexa and the new Echo brain. 


Amazon is making Alexa smarter with natural turn-taking, the ability to hold conversations with multiple people, improved natural language understanding, and the ability to be taught by customers. The first target is the smart home, but Alexa for Business is also likely to follow.

The Alexa overhaul and artificial intelligence improvements were outlined as Amazon launched its latest batch of Echo devices.

Amazon’s new Echo devices are evolving into smart home edge computing devices. For instance, they use the company’s AZ1 Neural Edge processor, which Amazon says delivers double the speech-processing performance with 20x less power and 85% lower memory usage.

That processor building block along with Amazon’s artificial intelligence advances are designed to make Echo more ambient. Dave Limp, senior vice president of devices and services at Amazon, said the new Echo devices are designed to make “moments count.”

Features such as Reading Sidekick, designed to help kids read, and conversational improvements are aimed at making Alexa more of a family member without as many “Alexa” words.

Rohit Prasad, vice president and head scientist for Alexa Artificial Intelligence at Amazon, outlined the following capabilities:

  • Taking interaction cues, noting errors, and then correcting them.
  • Learning from humans by asking follow-up questions when Alexa has a gap in knowledge, and retaining the answers as learned modes.
  • Using deep-learning parsers to understand gaps and extract new concepts.
  • Offering more natural conversation and adaptation.
  • Providing a follow-up mode when interacting with humans.

Prasad noted that Alexa can use visual and acoustic cues to determine the best action to take. “This natural turn taking allows people to interact with Alexa at their own pace,” said Prasad.


Rohit Prasad teaching Alexa. 


Chasing the ambient dream starting with the home

Limp’s talk outlined the new Echo devices and Echo Show 10, but the overall theme was that these devices can follow you around the room like a person would.

In addition, Alexa is going to be more interconnected with services.

Add it up, and Limp said the new neural processors all run locally but can tell when there’s motion. Limp added that Echo devices like the Show can use smart motion as well as visual cues to keep you centered.

There’s also a business use where there’s Amazon Chime and Zoom integration and the ability to handle group calls.

Today the Echo launch is all about a smarter Alexa and making her a part of your family. Rest assured, Alexa for Business is going to fast follow for next-gen working arrangements and hybrid offices.

Amazon rolled out Alexa for Business more than a year ago and has steadily added features via AWS. Skill Blueprints were launched in April 2018 as a way to allow anyone to create skills, and a 2019 update let them publish those skills to the Alexa Skills Store.

BI is dead; long live BI


The perception of legacy enterprise business intelligence (BI) platforms comes with some legitimate stigma and baggage. It’s technology first, not business-led; the graphical user interface (GUI)-based user experience (UX) doesn’t address ease of use for all business decision-makers; there are too many underutilized reports and dashboards floating around in the enterprise; and signals produced by BI applications aren’t actionable, resulting in a disconnect between BI and tangible business outcomes. So, is enterprise BI dead? Is the end near? 

No. If I got $1,000 every time I heard the phrase “BI is dead” over my 30-plus-year career, I’d be a very rich man. I recall claims that advanced data visualization and interactive data querying would replace static reports and dashboards. They didn’t; rather, all enterprise BI platforms built up that capability. I also vividly remember trends that started as long as 10-plus years ago and still persist: multiple vendors’ claims that analytics based on machine learning (ML) would push BI platforms, formerly limited to descriptive and diagnostic analytics, to the edge of extinction. “Should we replace our BI with AI?” was, and sometimes still is, a typical “BI is dead”-type client inquiry. That didn’t happen either. All leading BI vendors built or acquired augmented BI functionality, infused with ML for predictive analytics and with conversational UIs.

Today, Forrester uses the terms augmented BI and analytics interchangeably, and we’re researching the intersection between augmented BI and automated machine learning (AutoML). When we introduced the systems of insight (SOI) concept back in 2018, we concluded that BI is still a key component of SOI, which supports overall insights-driven business (IDB) capabilities alongside data management and data governance. 

So, enterprise BI is not dead. It’s alive and thriving — even more so now that the workforce relies on more digital data for decisions. But there are changes ahead. What’s in store for a post-dashboard world of BI? We believe five trends will shape the post-dashboard future. BI will become more: 

  1. Pervasive and ambient. Stitched together by BI fabric, insights from BI will always be there right at your fingertips (or just a question away), seamlessly embedded in all enterprise applications — such as in enterprise resource planning, CRM, and productivity. 
  2. Actionable and effective. In the future, BI will enable business users to turn insights into actions without having to leave whatever business or productivity application they have open — a natural extension of embedded BI, plus an emerging use of BI platforms as general purpose (not just analytical, read-only) low-code enterprise applications development. 
  3. Natural-language-based and conversational. BI will achieve greater adoption in the enterprise. More business users (all end users, not just data or business analysts) will be able to benefit from BI via natural conversational interaction/UX. 
  4. Valuable. BI will provide new and deeper insights. BI augmented with ML and NoSQL technologies like search and graph analytics will empower most BI users to get insights where they previously had to rely on data and data science professionals. 
  5. Anticipatory. BI will answer questions you didn’t even think you needed to ask via insights uncovered by ML being “pushed” to you. This’ll finally start addressing the “I don’t know what I don’t know” dilemma that traditional BI applications couldn’t address. 

We’ll be researching this post-dashboard future of BI and the current and emerging technologies that will turn these five trends into reality, so please stay tuned. And don’t forget to sign up for and attend our Data Strategy & Insights 2020 virtual event, coming up on October 13–15.

This post was written by VP, Principal Analyst Boris Evelson, and it originally appeared here.

Apple Watch Series 6 first look: New colors, blood oxygen sensor, and improved internals

Last week Apple debuted the new Apple Watch Series 6, and since I continue my search for the holy grail of wearables, I placed an order for a new (PRODUCT)RED Apple Watch Series 6. At first glance it doesn’t seem much different than the Apple Watch Series 5, but there’s more than meets the eye.


Regular readers know I’m a sucker for buying colorful watches and phones, so when I saw the new navy blue and red aluminum watch offerings, I knew one of them had to be mine. While blue is one of my favorite colors, I couldn’t find one readily available, and since I already purchased a (PRODUCT)RED Apple iPhone SE, I selected the red Apple Watch Series 6. These new colors are attractive and the one visually distinguishing difference between the Series 5 and Series 6.

Also: Apple Watch Series 5 review: This is the watch I’ve been waiting for

Turn on the display of the Apple Watch Series 6 and you may also note that the always-on retina display is 2.5 times brighter than the Series 5’s. At first I thought the full watch face was lit, but it was just the always-on display. Inside, the new watch has the Apple S6 chip, which Apple claims is 20% faster than the S5 and more energy efficient. We’ll have to see how the watch does over time to gauge the battery life.

Speaking of battery life, one software improvement is support for an official Apple sleep application. We’ve seen third-party solutions in the past, but Apple now officially supports sleep tracking, and it is well integrated with the alarm on the watch. The Apple Watch Series 6 also reportedly charges faster than the Series 5, so after a night of sleep you can slap the watch on the charger and top it off before heading out to work.

During sleep setup, Apple notes that the watch will warn you if there isn’t enough battery remaining to track your sleep. Last night was my first night sleeping with the Apple Watch Series 6, and I had the new Fitbit Sense strapped to the other arm as a comparison. The Fitbit Sense and the Fitbit ecosystem provide far more detail on the data captured during sleep, making the Apple Watch pretty useless for sleep data. After seeing just the time and a basic graph from the Apple Watch, it’s better to simply charge it up at night than use it for sleep until Apple enhances the sleep analysis area of Apple Health.


Sleep data is also buried deep in Apple Health, so it looks like Apple rushed the functionality to market. I expected a dedicated Apple Sleep app for the iPhone that would show REM sleep, deep sleep, blood oxygen levels, resting heart rate, and more, but none of that is readily apparent.

Also: Best smartwatches in 2020: Apple and Samsung battle for a spot on your wrist

Health and wellness are important for wearable makers today as the world deals with the global coronavirus pandemic. We have seen pulse oximeters, aka blood oxygen or SpO2 sensors, on wearables for years from companies like Garmin, Fitbit, and Coros. Early on, the measurement focused on tracking the condition of athletes at high altitude; it then evolved to nighttime tracking while someone is sleeping. Today, Garmin and Apple blood oxygen sensors can measure this information all day long.

I tested the Apple Watch Series 6 against the Garmin Forerunner 745 and saw results within 2% of each other. Last night I wore the Apple Watch Series 6 and Fitbit Sense with blood oxygen levels also within 2% of each other.
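To make the “within 2%” comparison concrete, here is a minimal sketch of how such a relative difference between two readings can be computed. The specific SpO2 values (97% and 95%) are hypothetical examples, not readings from the article:

```python
def percent_difference(a: float, b: float) -> float:
    """Relative difference between two sensor readings, expressed as a
    percentage of their mean (symmetric, so argument order doesn't matter)."""
    return abs(a - b) / ((a + b) / 2) * 100

# Hypothetical SpO2 readings of 97% and 95% from two watches worn at once
print(round(percent_difference(97, 95), 2))  # → 2.08
```

A symmetric (mean-based) denominator avoids having to pick one device as the “reference” when neither watch is a medical instrument.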

In typical Apple fashion, the blood oxygen sensing app is very well done on the Apple Watch. Cool animations and a large number countdown help ensure you stay still while measuring your blood oxygen levels, with easy-to-read results. In its press release for the new Apple Watch Series 6, Apple states that it is working closely with various medical facilities to continue to study and understand how blood oxygen measurements can help with health management. The key is tracking the trends and understanding that none of these watches are medical devices.

Also: Fitbit Sense review: Advanced health and wellness tracking, GPS, and coaching

To begin testing the GPS sports watch capability of the Apple Watch, I ran with it this morning to see how the new always-on altimeter performed. The watch can now detect elevation changes as small as 1 foot, and the data can be shown as a workout metric or a watch face complication.

As expected, the Apple Watch performed well as a GPS watch, but I’ll need to use another app if I am going to run with it since the Apple Workout app is very basic and doesn’t provide the glanceable information I want to see while running. I’ll have to spend more time with this app and then also explore the Strava app as a possible replacement.

Other health features of watchOS 7 include VO2 Max as a more visible metric (it was previously hidden in Apple Health), handwashing detection, and new workout types. These new workouts include Core Training, Dance, Functional Strength Training, and Cooldown.

I’ve only had one day and one night with the new Apple Watch Series 6 so I’ll be running, biking, hiking, sleeping, and more over the next couple of weeks before posting the in-depth review. I’ll be checking out watchOS 7 in detail, tracking my blood oxygen levels, seeing how sleep tracking stacks up with the Fitbit Sense, and more. If you have any specific questions or things you want me to test out, please leave a comment below or send me a message on Twitter.

Accenture misses Q4 revenue, EPS targets

Accenture reported a decline in earnings and revenue for the fiscal fourth quarter, citing a drop in reimbursable travel costs. The tech services firm said its Q4 net income was $1.12 billion, with non-GAAP earnings of $1.70 a share on revenue of $10.8 billion.

Wall Street was expecting Accenture to report earnings of $1.73 a share on revenue of $10.91 billion. Shares of Accenture were down by over 5% in early trading. 

Meanwhile, the company said consulting revenues for the quarter were down 8% to $5.68 billion. Outsourcing revenues came to $5.15 billion.

For the year, Accenture said earnings came to $7.89 per share on revenue of $44.3 billion, up from $43.2 billion in fiscal 2019. Fiscal 2020 consulting revenues were $24.2 billion, while outsourcing revenues were $20.1 billion.

“Our ability to pivot rapidly to meet the needs of our clients and new ways of operating is reflected in our record new bookings of $50 billion for fiscal 2020,” said Accenture CEO Julie Sweet. “We also continued to deliver revenue growth ahead of the market as well as strong profitability and superior cash flow. As we turn the page to fiscal 2021, we are better positioned than ever to continue gaining market share and delivering tangible value for our clients and shared success for all our stakeholders.”

For the outlook, Accenture is predicting revenue of $11.15 billion to $11.55 billion, a decrease of 3% to flat, for the first fiscal quarter of 2021. Wall Street expects Accenture to report Q1 earnings of $2.09 a share on revenue of $11.52 billion. For fiscal 2021, the company expects diluted EPS to be in the range of $7.80 to $8.10, below analyst estimates of $8.13 a share.


Mobile security: These seven malicious apps have been downloaded by 2.4m Android and iPhone users

Almost two and a half million Android and iPhone users downloaded seven adware apps from the Google Play Store and Apple App Store, according to research by a cybersecurity company.

Many of the apps were being promoted via TikTok and Instagram accounts – one of which had over 300,000 followers. The apps, detailed by cybersecurity researchers at Avast, have been brought to the attention of Apple and Google.

The apps themselves are all relatively simple – prank applications to ‘shock’ friends, music downloaders, and wallpaper apps – but they all aggressively display pop-ups that either outright charge users for additional functions or show full-screen adverts that users must click on to remove. Both schemes generate revenue for those behind the apps.

One of the ways the apps managed to bypass the security protections of the official Android and Apple app stores is that they’re HiddenAds trojans, which appear legitimate to app store checks but pull malicious functionality from outside the application.

SEE: Cybersecurity: Let’s get tactical (ZDNet/TechRepublic special feature) | Download the free PDF version (TechRepublic)

That means the activity only emerges once the user has installed the app and granted permissions that enable it to receive instructions from outside the app – which in this case is to display intrusive adverts and demand individual charges of up to $8 from users.

“The apps we discovered are scams and violate both Google’s and Apple’s app policies by either making misleading claims around app functionalities, or serving ads outside of the app and hiding the original app icon soon after the app is installed,” said Jakub Vávra, threat analyst at Avast.  

The apps that have been removed from Google Play include ThemeZone – Shawky App Free – Shock My Friends and Ulimate Music Downloader – Free Download Music. Another set of apps, including Shock My Friends – Satuna, 666 Time, ThemeZone – Live Wallpapers, and shock my friend tap roulette v, are no longer available from the Apple App Store in the UK.

While adware, malware, and other malicious apps can be difficult to identify, the best protection is not installing them in the first place. Carefully read app reviews: low ratings and complaints about functionality or excess charges can indicate something is wrong.

Users should also be wary of apps that charge excessive amounts for basic features, as that’s likely a sign something isn’t right. It’s also a good idea to check the permissions an app asks for, because requesting excessive access to the device is another warning sign.

The researchers note that one of the apps requests access to a device’s external storage, which can include photos, videos, and files, depending on how the storage is used. “Accessing external storage is not a must for a wallpaper app,” said Vávra.

“So rather than just tapping “Allow,” the next time a new app asks for certain permissions, take a minute to think about whether or not it really needs that access. Does a weather app need to access your microphone? Nope. Does a wallpaper app need to access your storage? Nope. That’s a sign the app is likely a scam,” he added.
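Vávra’s advice – ask whether an app’s requested permissions match what its category actually needs – amounts to a simple mismatch check. The sketch below is purely illustrative (it is not Avast’s detection logic, and the per-category allow-lists are hypothetical), but it shows the idea: any requested permission outside the category’s expected set is worth questioning.

```python
# Hypothetical allow-lists: permissions each app category plausibly needs.
# These are illustrative assumptions, not a real security policy.
EXPECTED_PERMISSIONS = {
    "wallpaper": {"android.permission.INTERNET"},
    "weather": {
        "android.permission.INTERNET",
        "android.permission.ACCESS_COARSE_LOCATION",
    },
}

def suspicious_permissions(category: str, requested: list[str]) -> list[str]:
    """Return the requested permissions a given app category has no
    obvious need for, sorted for stable output."""
    expected = EXPECTED_PERMISSIONS.get(category, set())
    return sorted(set(requested) - expected)

# A wallpaper app asking for storage access stands out immediately:
print(suspicious_permissions(
    "wallpaper",
    ["android.permission.INTERNET",
     "android.permission.READ_EXTERNAL_STORAGE"],
))  # → ['android.permission.READ_EXTERNAL_STORAGE']
```

In practice this check happens in your head at install time: a weather app asking for the microphone, or a wallpaper app asking for storage, fails the test.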

Google told ZDNet that the offending apps have been removed from the store – although ZDNet has informed Google that at the time of writing one remains. Apple hasn’t responded to a request for comment.