Garmin Speak is a GPS Navigation Unit with Amazon Alexa Built-in

Similar to how Google offers the Google Assistant SDK, Amazon has its own SDK for its Alexa service that allows third-party companies to embed the assistant into their products. We’ve seen this implemented in a wide range of products, including connected speakers, refrigerators, and more. Today, Garmin has announced a new product called the Garmin Speak, which combines the company’s GPS navigation system with the capabilities of an Amazon Alexa device.

Amazon recently held an Alexa hardware launch event and unveiled a number of new Echo products, but its focus there was on the home. Garmin makes products for various aspects of your life, and the company’s GPS navigation systems have been popular for decades. Many people have shifted to using their smartphones for turn-by-turn navigation out of a desire to consolidate devices, but Garmin hasn’t given up on this market.

Garmin has just announced the Garmin Speak, a small GPS navigation unit for your automobile with Amazon Alexa built in. The product is priced at $150 and is currently available at select retailers, including Best Buy and Amazon. The 1.5-inch device features an OLED display with a 114 x 64 pixel resolution and a faint LED ring around the panel. It comes with a power cable and has a speaker built into the unit, but you can buy an AUX cable to connect it to your car’s stereo system.

Since it has Amazon Alexa built in, you can issue a ton of commands to the device using only your voice. This can be anything from streaming music to listening to the news or weather forecast, or turning off smart home devices that you forgot about on your way out the door. With Garmin’s focus being on GPS navigation, you can also ask for directions using the “Alexa, ask Garmin for directions to…” command.
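Garmin’s own skill implementation isn’t public, but to give a rough idea of how a third-party “ask Garmin for directions to…” style command gets handled, here is a minimal sketch of a custom Alexa skill handler using Amazon’s ask-sdk for Python. The “GetDirectionsIntent” name and the “destination” slot are hypothetical stand-ins, not Garmin’s actual skill definition.

```python
# Minimal sketch of a custom Alexa skill intent handler (ask-sdk-core).
# "GetDirectionsIntent" and the "destination" slot are hypothetical names,
# not Garmin's actual skill definition.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_intent_name

sb = SkillBuilder()

@sb.request_handler(can_handle_func=is_intent_name("GetDirectionsIntent"))
def get_directions_handler(handler_input):
    # Alexa fills the slot from "ask Garmin for directions to <destination>".
    slots = handler_input.request_envelope.request.intent.slots
    destination = slots["destination"].value

    # A real skill would hand the destination off to the navigation engine.
    speech = f"Starting navigation to {destination}."
    return handler_input.response_builder.speak(speech).response

# The AWS Lambda entry point that Alexa invokes for each request.
handler = sb.lambda_handler()
```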


Source: Garmin Newsroom

WhatsApp Adds Live Location Sharing

It’s fair to say that WhatsApp is the most popular messaging service in the world. The company, acquired by Facebook in 2014, recently said that it has 1.3 billion monthly active users and 1 billion daily active users. Other mind-boggling statistics, such as a total of 55 billion messages sent every day, have also been promoted by the company. This is why, when WhatsApp adds a new feature, it can affect the experience of more than a billion users around the globe. In recent months, we have seen the introduction of WhatsApp Status and the ability to send files of any type. Today, the company has added another: Live Location, WhatsApp’s take on real-time location sharing.

Location sharing has existed in WhatsApp for quite a few years, and it has worked well, integrating with Google Maps to send the user’s location in a personal or group chat.

However, Live Location goes a step beyond static location sharing. Live location sharing, now offered by Google Maps, Facebook Messenger, and other apps, means that the person you are having a conversation with can see your precise position on a live map that updates continuously for a duration you set beforehand.

According to WhatsApp, the procedure to initiate live location sharing consists of:

  • Open the personal or group chat with which you want to share your live location.
  • Tap the attach button, choose “Location”, and then tap the new “Share Live Location” option.
  • Choose how long you want to share your location, for example one hour, and then tap send.
  • If it’s a group chat, each person in the chat will be able to see your real-time location on a map. If multiple people share their Live Location in the group, all of their locations will be visible on the same map.
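WhatsApp hasn’t published its implementation details, but conceptually a live-location sender boils down to publishing periodic coordinate updates until the chosen duration expires. Here is a hypothetical sketch of that loop; the GPS read and the send function are stand-ins for the real app’s internals, and actual traffic would be end-to-end encrypted:

```python
# Conceptual sketch of time-limited live location sharing. This is NOT
# WhatsApp's actual protocol; read_gps_fix and send_location_update are
# hypothetical stand-ins for the app's GPS and transport layers.
import random
import time

def read_gps_fix():
    # Stand-in for the device's GPS layer: returns a (lat, lon) pair.
    return (37.7749 + random.uniform(-1e-4, 1e-4),
            -122.4194 + random.uniform(-1e-4, 1e-4))

def send_location_update(chat_id, lat, lon):
    # Stand-in for the messaging transport.
    print(f"[{chat_id}] location update: {lat:.5f}, {lon:.5f}")

def share_live_location(chat_id, duration_seconds, interval_seconds=5):
    """Publish GPS fixes to a chat until the chosen duration expires."""
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        lat, lon = read_gps_fix()
        send_location_update(chat_id, lat, lon)
        time.sleep(interval_seconds)
    # After the deadline, the shared map simply stops receiving updates.

share_live_location("family-group", duration_seconds=15)
```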

This feature has well-defined use cases for commuting and keeping track of family and friends, and it is safe to expect that it will be adopted by millions of users. In terms of security and privacy, WhatsApp stated that Live Location is end-to-end encrypted just like WhatsApp chats, voice calls and video calls.

Live Location will roll out on the Android app in the coming weeks.


Source: WhatsApp

DxO One Camera for Android is Coming Soon

At this point, DxO Labs is a company that has been in the news for a while. The DxOMark suite used for evaluating camera quality has proved extremely popular with smartphone OEMs, to the point where they are now expected to consult with DxO Labs in an attempt to get a good score on DxOMark, and then promote that score at their smartphone launches. We have seen questions raised over the credibility of DxOMark scores. But in this debate, it’s far too easy to forget that DxOMark is not the only product offered by DxO Labs. The company also sells its One camera for the iPhone, and it has now announced that the DxO One camera for Android is coming soon.

The DxO One camera attachment for the iPhone was released in June 2015. It pairs a 20MP 1-inch sensor (similar to the one in the Sony RX100 series) with a maximum aperture of f/1.8 in a pocketable design. The unit has only a small onboard OLED display, relying on the iPhone’s screen as a viewfinder for framing photos, and it connects to the phone via the Lightning port. Because it has a 1-inch sensor, it can in many cases take better photos than the ordinary sensors found in smartphones, which range in size from roughly 1/2.5-inch to 1/3.1-inch. It also supports RAW photography via its Super RAW mode.
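For a rough sense of why the larger sensor matters, here is a quick back-of-the-envelope comparison of nominal sensor areas. The millimeter figures below are the commonly quoted approximations for these “inch type” designations, not exact specifications:

```python
# Back-of-the-envelope comparison of nominal sensor areas. The millimeter
# dimensions are commonly quoted approximations for each "inch type"
# designation, so treat the ratios as rough.
sensors_mm = {
    "1-inch type (DxO One)": (13.2, 8.8),
    "1/2.5-inch type (typical phone)": (5.8, 4.3),
    "1/3-inch type (typical phone)": (4.8, 3.6),
}

reference_area = 13.2 * 8.8  # 1-inch type area in mm^2
for name, (width, height) in sensors_mm.items():
    area = width * height
    print(f"{name}: {area:6.1f} mm^2 ({area / reference_area:.2f}x the 1-inch area)")
# The 1-inch sensor has roughly 5-7x the area of common phone sensors,
# which is the main reason it gathers more light and can out-shoot them.
```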

Now, DxO Labs has given the Android release of the same camera attachment “coming soon” status. It will connect to Android phones via the now-ubiquitous USB Type-C port, which is found on every Android flagship as well as most mid-range and budget Android smartphones.

Customers can sign up to find out more about the DxO One camera for Android, and more details are scheduled to be released on November 2. The camera will first be offered to “Early Access” users through a program that opens to the general public in the coming weeks, and version 1.0 of the companion DxO One mobile app will be released alongside it. The company says it plans to use the Early Access program to refine the experience before the product reaches general availability.


Source: DxO Labs

Lawnchair Launcher is Now on the Google Play Store

Lawnchair Launcher, a customizable take on the Pixel Launcher, has now launched on the Google Play Store. Lawnchair comes with a wide range of features not found in the stock Pixel Launcher, including icon pack support, experimental features, notification badges, the Google Now panel, and more. The only caveat is that the Play Store release ships without the Google Now panel. This is because enabling the Google Now panel requires an application to be built as debuggable, and debuggable applications can’t be uploaded to the Google Play Store. To work around this, the developer of Lawnchair has released “Lawnfeed”, a separately installed APK that enables the Google Now panel. This works the same way as Nova Launcher’s companion app, if you have ever enabled the Google Now panel there: simply install it and the launcher will have its full functionality unlocked. Over on XDA TV, we have also done a video review of the application, showcasing its features and ease of use!

You can check out Lawnchair on the Play Store now, which some may find strange given that it’s essentially a modified version of the Google Pixel Launcher. With a wide range of features taken from the Pixel Launcher, along with others that are experimental in nature, it’s a pleasant surprise to find it on the Play Store. If you’re looking for a free and simple alternative to paid launchers such as Nova Launcher, give Lawnchair a try. You’ll get updates through the Play Store, so you won’t have to follow the XDA thread for new updates to the application. Check it out down below!

Lawnchair Launcher (Unreleased) (Free, Google Play) →



Google Discusses the Tech Used for Portrait Mode on the Pixel 2

Among other trends in the smartphone industry right now, we’re seeing more OEMs start to put two cameras on the back of their devices for a number of reasons. Some use the second camera for the bokeh effect of “portrait mode” shots, while others use a black-and-white sensor (to try to improve overall picture quality), a wide-angle lens, or a telephoto lens for improved zooming. The bokeh effect is what the typical “portrait mode” feature relies on, but Google has been able to achieve it on the Pixel 2 and the Pixel 2 XL without needing two camera sensors (instead, they use a “dual pixel” approach at the hardware level).

As smartphone hardware starts to stagnate and hit a plateau in some key areas, we’re going to need to see innovation at the software level. This is exactly what Google has been doing lately with its applications, leveraging the machine learning technology it has been working on and bringing to its products over the past few years. At the Pixel 2 and Pixel 2 XL launch event, Google CEO Sundar Pichai said the company was shifting from being mobile-first to being AI-first, and we’re seeing the results of that strategy right now.


In a new post over on the Google Research Blog, Google has shared some details about how it’s able to emulate the bokeh effect on its new phones without using two cameras. This is done in two different ways, since the phones use two different types of camera technology: the rear camera sensor can actually create a depth map on its own, while the front-facing camera can’t, which is where the machine learning technology comes into play.

With the back camera of the Pixel 2 and the Pixel 2 XL, Google is able to utilize the Phase-Detect Auto-Focus (PDAF) pixels (sometimes called dual-pixel autofocus) of the camera sensor it chose. To picture how this works, imagine the rear camera sensor split in half, with one side seeing the image one way while the other half sees it slightly differently. Although these two viewpoints are less than 1mm apart, the difference is enough for Google to compute a solid depth map.
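Google’s exact pipeline isn’t reproduced here, but the underlying idea is classic stereo matching: compare two slightly offset views and turn the per-patch shift (the disparity) into depth. Below is a minimal sketch using OpenCV’s block matcher; the input file names are hypothetical, and in the Pixel 2’s case the two “views” would come from the left and right halves of each dual pixel rather than from two separate cameras.

```python
# Minimal sketch: estimating a depth map from two slightly offset views,
# in the spirit of dual-pixel PDAF. File names are hypothetical; the two
# views stand in for the left/right halves of the dual-pixel sensor.
import cv2

left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds how far each patch shifts between the two views.
# Nearby objects shift more (larger disparity) than distant ones.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Disparity is inversely proportional to depth (depth = focal * baseline /
# disparity); a calibrated pipeline would convert it to metric distances.
normalized = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity_map.png", normalized.astype("uint8"))
```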

The front camera of Google’s new smartphones doesn’t feature this type of technology, though. Thanks to the machine learning and computational photography techniques that Google has been training and improving lately, the phone can instead produce a segmentation mask of what it determines to be the subject of the photo. Once the mask is created, Google blurs the background while keeping the subject of the photo sharp.
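The source post doesn’t include code, but the final compositing step it describes, keeping the masked subject sharp while blurring everything else, can be sketched in a few lines. The file names below are hypothetical, and Google’s real pipeline uses a more sophisticated, depth-aware blur rather than a simple Gaussian:

```python
# Minimal sketch: compositing a sharp subject over a blurred background
# using a segmentation mask. File names are hypothetical, and Google's
# real pipeline uses a more sophisticated depth-aware blur.
import cv2
import numpy as np

image = cv2.imread("selfie.png")
# The mask is white (255) where the ML model believes the subject is.
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# Blur a copy of the whole frame to use as the background layer.
blurred = cv2.GaussianBlur(image, (31, 31), 0)

# Feather the mask edges so the subject blends smoothly into the blur.
alpha = cv2.GaussianBlur(mask, (15, 15), 0).astype(np.float32) / 255.0
alpha = alpha[..., np.newaxis]  # broadcast across the 3 color channels

# Keep the subject sharp where alpha is 1, blur everything else.
result = (alpha * image + (1.0 - alpha) * blurred).astype(np.uint8)
cv2.imwrite("portrait_mode.png", result)
```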

Follow the link below to read the entire Google Research Blog entry and learn more!


Source: Google Research Blog