hardware

Mid-range flagships like the Honor 20 Pro are giving premium phones a run for their money

Posted in hardware, Honor, huawei, Mobile, smartphones

Phone sales have been trending downward for some time now. There are a number of reasons for this — many of which you can read about in this piece I published last week. The creeping cost of premium handsets is pretty high on that list, with flagships now routinely topping $1,000 from many of the big names.

The big smartphone makers have begun to react to this, with budget flagship alternatives like the iPhone XR, Galaxy S10e and Pixel 3a. A new crop of mid-range flagships, however, are giving them a run for their money and serving as an important reminder that a quality handset doesn’t need to be priced in the four digits.

The Honor 20 Pro fits nicely in the latter camp, joining the likes of the recently announced OnePlus 7 Pro and Asus ZenFone 6 in demonstrating that premium specs can still be had for what was once considered a reasonable flagship price.

Of course, before we get into specifics of pricing with the newly announced handset, it bears asking whether Honor, a brand owned by Huawei, will actually ever make it to the States. That’s all pretty complicated — like Donald Trump in a trade war with China complicated. The pricing on the London-launched Pro version is €599, putting it at around $670.

The phone’s got Huawei’s latest and greatest Kirin 980 processor, coupled with a 6.26-inch display with hole-punch cutout and a quartet of rear-facing cameras. Those include a 117-degree wide-angle, a 48-megapixel main camera, a telephoto and a macro, the last an interesting addition to the standard array. The Pro is due out at some point in the June or July time frame.

Huawei bans aside, it will be interesting to see how this new crop of more affordable premium devices impacts the rest of the big names up top.

Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself

Posted in Gadgets, hardware, robotics, science, stanford, Stanford University

Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, but keep costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved, but by sampling the forces on the legs 8,000 times per second and responding as quickly, the motors can act like virtual springs.
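
As a rough sketch of what that control idea looks like in code, a virtual spring-damper loop boils down to the following. The gains, loop structure and motor interface here are made up for illustration; the team’s actual firmware lives in their published designs.

```python
# A virtual spring-damper: no physical spring, just a fast loop that reads
# leg deflection and commands a restoring torque. Gains, rates and the
# motor interface are illustrative stand-ins, not Doggo's firmware.

K_SPRING = 80.0     # virtual stiffness
K_DAMPING = 0.5     # virtual damping
LOOP_HZ = 8000      # the sampling rate cited above

def virtual_spring_torque(angle, rest_angle, velocity):
    """Make a rigid leg behave like a spring-damper around rest_angle."""
    return -K_SPRING * (angle - rest_angle) - K_DAMPING * velocity

def control_loop(motor, rest_angle):
    while True:
        angle, velocity = motor.read_state()    # sample leg state at 8 kHz
        motor.set_torque(virtual_spring_torque(angle, rest_angle, velocity))
        motor.wait(1.0 / LOOP_HZ)
```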

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving on Doggo’s capabilities in collaboration with the university’s Robotic Exploration Lab, while also working on a similar robot twice the size — Woofer.

Why is Facebook doing robotics research?

Posted in artificial intelligence, Facebook, Gadgets, hardware, robotics, robots, science, Social, TC

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy,” the hexapod robot

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
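
A minimal sketch of that kind of learning loop, with a hypothetical env and policy standing in for the real training setup, which Facebook hasn’t published in this form:

```python
# Illustrative learning loop: the robot gets "rewarded" for forward progress
# and refines its gait by trial and error. env and policy are hypothetical
# stand-ins, not the team's actual training code.

def train(env, policy, episodes=1000):
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy.sample(state)         # try something with the legs
            next_state, done = env.step(action)
            reward = next_state.x - state.x       # forward motion earns reward
            policy.update(state, action, reward)  # slowly refine the model
            state = next_state
```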

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff, is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the auto-didactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
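
One common way to formalize that drive is to score candidate actions by their task value plus an uncertainty-reduction bonus. Here’s a hedged sketch; the model interface and weighting are assumptions for illustration, not Facebook’s actual system:

```python
# Curiosity as an uncertainty-reduction bonus: each candidate action is
# scored by its task value plus how much it is expected to shrink the
# model's uncertainty. The model interface is a hypothetical stand-in.

CURIOSITY_WEIGHT = 0.3   # how strongly to favor confidence-building actions

def choose_action(model, state, candidate_actions):
    def score(action):
        task_value = model.expected_value(state, action)
        info_gain = (model.uncertainty(state)
                     - model.expected_uncertainty(state, action))
        return task_value + CURIOSITY_WEIGHT * info_gain
    return max(candidate_actions, key=score)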

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” they would produce CPU usage spikes, visible latency in the image and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing them all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.
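
In code terms, that happy medium is a simple gate. A sketch, with the threshold and all interfaces assumed for illustration:

```python
# Gate the heavyweight analyses on a confidence estimate: cheap tracking
# runs every frame, the expensive passes only when uncertainty creeps up.
# Threshold and interfaces are assumptions, not any shipping pipeline.

CONFIDENCE_THRESHOLD = 0.8

def process_frame(frame, scene_model):
    scene_model.update_cheap(frame)              # lightweight per-frame work
    if scene_model.confidence() < CONFIDENCE_THRESHOLD:
        scene_model.run_full_analysis(frame)     # depth, text, faces, etc.
    return scene_model.current_estimate()
```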

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
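
A tiny worked example of that idea: pressure readings on a grid go through a classic image filter just as a photo would. NumPy only, with an illustrative sensor size; nothing here comes from Facebook’s code.

```python
import numpy as np

# Pressure readings from a tactile sensor form a 2D grid; treated as a
# single-channel "image," they can be run through the same operations used
# on photographs. The sensor size and filter here are illustrative.

pressure = np.random.rand(64, 64)       # stand-in for one tactile readout

kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])       # classic edge-detection filter

def filter2d(img, k):
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

edges = filter2d(pressure, kernel)      # highlights contact boundaries
```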

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

This clever transforming robot flies and rolls on its rotating arms

Posted in drones, Gadgets, hardware, robotics, science, TC, UAVs

There’s great potential in using both drones and ground-based robots for situations like disaster response, but generally these platforms either fly or creep along the ground. Not the “Flying STAR,” which does both quite well, and through a mechanism so clever and simple you’ll wish you’d thought of it.

Conceived by researchers at Ben-Gurion University in Israel, the “flying sprawl-tuned autonomous robot” is based on the elementary observation that both rotors and wheels spin. So why shouldn’t a vehicle have both?

Well, there are lots of good reasons why it’s difficult to create such a hybrid, but the team, led by David Zarrouk, overcame them with the help of today’s high-powered, lightweight drone components. The result is a robot that can easily fly when it needs to, then land softly and, by tilting the rotor arms downwards, direct that same motive force into four wheels.

Of course you could have a drone that simply has a couple of wheels on the bottom that let it roll along. But this improves on that idea in several ways. In the first place, it’s mechanically more efficient because the same motor drives the rotors and wheels at the same time — though when rolling, the RPMs are of course considerably lower. But the rotating arms also give the robot a flexible stance, large wheelbase and high clearance that make it much more capable on rough terrain.

You can watch FSTAR fly, roll, transform, flatten and so on in the following video, prepared for presentation at the IEEE International Conference on Robotics and Automation in Montreal:

The ability to roll along at up to 8 feet per second using comparatively little energy, while also being able to leap over obstacles, scale stairs or simply ascend and fly to a new location, gives FSTAR considerable adaptability.

“We plan to develop larger and smaller versions to expand this family of sprawling robots for different applications, as well as algorithms that will help exploit speed and cost of transport for these flying/driving robots,” said Zarrouk in a press release.

Obviously at present this is a mere prototype, and will need further work to bring it to a state where it could be useful for rescue teams, commercial operations and the military.

The state of the smartphone

Posted in Apple, hardware, Mobile, Samsung, smartphone

Earlier this month, Canalys used the word “freefall” to describe its latest reporting. Global shipments fell 6.8% year over year. At 313.9 million, they were at their lowest level in nearly half a decade.

Of the major players, Apple was easily the hardest hit, falling 23.2% year over year. The firm says that’s the “largest single-quarter decline in the history of the iPhone.” And it’s not an anomaly, either. It’s part of a continued slide for the company, seen most recently in its Q1 earnings, which found the handset once again missing Wall Street expectations. That came on the tail of a quarter in which Apple announced it would no longer be reporting sales figures.

Tim Cook has placed much of the iPhone’s slide at the feet of a disappointing Chinese market. It’s been a tough nut for the company to crack, in part due to a slowing national economy. But there’s more to it than that. Trade tensions and increasing tariffs have certainly played a role — and things look like they’ll be getting worse before they get better on that front, with a recent jump from 10 to 25% in tariffs on $60 billion in U.S. goods.

It’s important to keep in mind here that many handsets, regardless of country of origin, contain both Chinese and American components. On the U.S. side of the equation, that includes nearly ubiquitous elements like Qualcomm processors and a Google-designed operating system. But the causes of a stagnating (and now declining) smartphone market date back well before the current administration began sowing the seeds of a trade war with China.

Image via Miguel Candela/SOPA Images/LightRocket via Getty Images

The underlying factors are many. For one thing, smartphones simply may be too good. It’s an odd notion, but an intense battle between premium phone manufacturers may have resulted in handsets that are simply too good to warrant the long-standing two-year upgrade cycle. NPD Executive Director Brad Akyuz tells TechCrunch that the average flagship smartphone user tends to hold onto their phone for around 30 months — two and a half years.

That’s a pretty dramatic change from the days when smartphone purchases were driven almost exclusively by contracts. Upgrades here in the States followed the standard 24-month contract cycle. When one lapsed, it seemed all but a given that the customer would purchase the latest version of their handset, heavily subsidized under a new contract.

But as smartphone build quality has increased, so too have prices, as manufacturers have raised margins in order to offset declining sales volume. “All of a sudden, these devices became more expensive, and you can see that average selling price trend going through the roof,” says Akyuz. “It’s been crazy, especially on the high end.”

Asus’ $499 ZenFone 6 has a flip-up camera and a giant battery

Posted in asus, hardware, Mobile, smartphones

Premium smartphone manufacturers have moved the needle on pricing, but 2019 may well go down as a kind of golden age for budget flagships. Apple, Google and Samsung are all in that business now, and OnePlus has once again shown the world how to offer more for less. And then there’s the new ZenFone.

It’s a bit of an understatement to suggest that Asus has had trouble breaking into the smartphone space. And things aren’t likely to get any easier as the market further consolidates among the top five players. But you’ve got to hand it to the company for swinging for the fences with the $499 ZenFone 6.

First things first. Like the excellent OnePlus 7 Pro, the phone (fone?) forgoes the notch and hole punch, instead opting for a clever camera module that flips up from the back. That means one camera is doing double duty, toggling between the front and rear with the push of an on-screen button. As on the OnePlus, there’s built-in fall detection that retracts the camera if it slips from your hand.

5000 > 1+7+3700, so why choose ordinary when you can #DefyOrdinary? #ZenFone6 pic.twitter.com/x8R24953mS

ASUS (@ASUS) May 14, 2019

That whole dealie would be enough to help the phone stand out in a world of similar handsets, but this is a solid budget handset through and through. Inside is a bleeding-edge Snapdragon 855, coupled with a beefy 5,000 mAh battery. The new ZenFone also sports a headphone jack, because it’s 2019 and rules don’t apply to smartphones anymore.

Is that all enough to right the ship? Probably not, but it’s nice to see Asus stepping up with a compelling product at an even more compelling price point. More information on the phone’s U.S. release should be arriving soon.

ObjectiveEd is building a better digital curriculum for vision-impaired kids

Posted in accessibility, Apps, Blindness, Education, Gadgets, Gaming, hardware, objectiveed, TC, visual impairment, visually impaired

Children with vision impairments struggle to get a solid K-12 education for a lot of reasons — so the more tools their teachers have to impart basic skills and concepts, the better. ObjectiveEd is a startup that aims to empower teachers and kids with a suite of learning games accessible to all vision levels, along with tools to track and promote progress.

Some of the reasons why vision-impaired kids don’t get the education they deserve are obvious, for example that reading and writing are slower and more difficult for them than for sighted kids. But other reasons are less obvious, for example that teachers have limited time and resources to dedicate to these special needs students when their overcrowded classrooms are already demanding more than they can provide.

Technology isn’t the solution, but it has to be part of the solution, because technology is so empowering and kids take to it naturally. There’s no reason a blind 8-year-old can’t also be a digital native like her peers, and that presents an opportunity for teachers and parents both.

This opportunity is being pursued by Marty Schultz, who has spent the last few years as head of a company that makes games targeted at the visually impaired audience, and in the process saw the potential for adapting that work for more directly educational purposes.

“Children don’t like studying and don’t like doing their homework,” he told me. “They just want to play video games.”

It’s hard to argue with that. True of many adults too, for that matter. But as Schultz points out, this is something educators have realized in recent years and turned to everyone’s benefit.

“Almost all regular education teachers use educational digital games in their classrooms and about 20% use it every day,” he explained. “Most teachers report an increase in student engagement when using educational video games. Gamification works because students own their learning. They have the freedom to fail, and try again, until they succeed. By doing this, students discover intrinsic motivation and learn without realizing it.”

Having learned to type, point and click, do geometry and identify countries via games, I’m a product of this same process, and many of you likely are as well. It’s a great way for kids to teach themselves. But how many of those games would be playable by a kid with vision impairment or blindness? Practically none.

Held back

It turns out that these kids, like others with disabilities, are frequently left behind as the rising technology tide lifts everyone else’s boats. The fact is it’s difficult and time-consuming to create accessible games that target things like Braille literacy and blind navigation of rooms and streets, so developers haven’t been able to do so profitably and teachers are left to themselves to figure out how to jury-rig existing resources or, more likely, fall back on tried and true methods like printed worksheets, in-person instruction and spoken testing.

And because teacher time is limited and instructors trained in vision-impaired learning are thin on the ground, it’s also difficult to tailor these outdated methods to an individual student’s needs. For example, a kid may be great at math but lack directionality skills. You need to draw up an “individual education plan” (IEP) explaining (among other things) this and what steps need to be taken to improve, then track those improvements. It’s time-consuming and hard! The idea behind ObjectiveEd is to create both games that teach these basic skills and a platform to track and document progress as well as adjust the lessons to the individual.

How this might work can be seen in a game like Barnyard, which like all of ObjectiveEd’s games has been designed to be playable by blind, low-vision or fully sighted kids. The game has the student finding an animal in a big pen, then dragging it in a specified direction. The easiest levels might be left and right, then move on to cardinal directions, then up to clock directions or even degrees.

“If the IEP objective is ‘Child will understand left versus right and succeed at performing this task 90% of the time,’ the teacher will first introduce these concepts and work with the child during their weekly session,” Schultz said. That’s the kind of hands-on instruction they already get. “The child plays Barnyard in school and at home, swiping left and right, winning points and getting encouragement, all week long. The dashboard shows how much time each child is playing, how often, and their level of success.”

That’s great for the mandated IEP documentation, and difficulty can be changed on the fly as well:

“The teacher can set the game to get harder or faster automatically, or move onto the next level of complexity automatically (such as never repeating the prompt when the child hesitates). Or the teacher can maintain the child at the current level and advance the child when she thinks it’s appropriate.”
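
To make that concrete, here’s a toy sketch of the progression logic described above. The level names come from the Barnyard example; the class, thresholds and auto-advance rule are assumptions for illustration, not ObjectiveEd’s actual code.

```python
# Toy progression tracker in the spirit of the Barnyard example: log each
# attempt, surface the success rate for the dashboard, and (optionally)
# auto-advance once the IEP target is met. Names and thresholds are
# assumptions, not ObjectiveEd's implementation.

LEVELS = ["left/right", "cardinal directions", "clock directions", "degrees"]
IEP_TARGET = 0.90        # "succeed at performing this task 90% of the time"
MIN_ATTEMPTS = 20        # don't advance on a lucky streak

class ProgressTracker:
    def __init__(self, auto_advance=True):
        self.level = 0
        self.attempts = []
        self.auto_advance = auto_advance   # teacher may prefer manual control

    def record(self, success):
        self.attempts.append(success)
        rate = sum(self.attempts) / len(self.attempts)
        if (self.auto_advance and len(self.attempts) >= MIN_ATTEMPTS
                and rate >= IEP_TARGET and self.level < len(LEVELS) - 1):
            self.level += 1                # e.g. left/right -> cardinal
            self.attempts = []             # fresh stats for the new level
        return rate                        # reported to the dashboard
```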

This isn’t meant to be a full-on K-12 education in a tablet app. But it helps close the gap between kids who can play Mavis Beacon or whatever on school computers and vision-impaired kids who can’t.

Practical measures

Importantly, the platform is not being developed without expert help — or, as is actually very important, without a business plan.

“We’ve developed relationships with several schools for the blind as well as leaders in the community to build educational games that tackle important skills,” Schultz said. “We work with both university researchers and experienced Teachers of Visually Impaired students, and Certified Orientation and Mobility specialists. We were surprised at how many different skills and curriculum subjects that teachers really need.”

Based on their suggestions, for instance, the company has built two games to teach iPhone gestures and the VoiceOver rotor. This may be a proprietary accessibility technology from Apple, but it’s something these kids need to know how to use, just like they need to know how to run a Google search, use a mouse without being able to see the screen, and perform other common computing tasks. Why not learn it in a game like the other stuff?

Making technological advances is all well and good, but doing so while building a sustainable business is another thing many education startups have failed to address. Fortunately, public school systems actually have significant money set aside specifically for students with special needs, and products that improve education outcomes are actively sought and paid for. These state and federal funds can’t be siphoned off to use on the rest of the class, so if there’s nothing to spend them on, they go unused.

ObjectiveEd has the benefit of being easily deployed without much specialty hardware or software. It runs on iPads, which are fairly common in schools and homes, and the dashboard is a simple web one. Although it may eventually interface with specialty hardware like Braille readers, it’s not necessary for many of the games and lessons, so that lowers the deployment bar as well.

The plan for now is to finalize and test the interface and build out the games library — ObjectiveEd isn’t quite ready to launch, but it’s important to build it with constant feedback from students, teachers and experts. With luck, in a year or two the visually impaired youngsters at a school near you might have a fun new platform to learn and play with.

“ObjectiveEd exists to help teachers, parents and schools adapt to this new era of gamified learning for students with disabilities, starting with blind and visually impaired students,” Schultz said. “We firmly believe that well-designed software combined with ‘off-the-shelf’ technology makes all this possible. The low cost of technology has truly revolutionized the possibilities for improving education.”

The Meizu 16s offers flagship features at a mid-range price

Posted in Android, Gadgets, hardware, Meizu, mobile phones, Qualcomm, smartphone, smartphones, snapdragon 855, TC, United States

Smartphones have gotten more expensive over the last few years even though there have only been a handful of recent innovations that really changed the way you interact with the phone. It’s maybe no surprise then that there is suddenly a lot more interest in mid-range, sub-$500 phones again. In the U.S., Google’s new Pixel 3a, with its superb camera, is bringing a lot of credibility to this segment. Outside the U.S., though, you can often get a flagship phone for less than $500 that makes none of the trade-offs typically associated with a mid-range phone. So when Meizu asked me to take a look at its new 16s flagship, which features (almost) everything you’d expect from a high-end Android phone, I couldn’t resist.

Meizu, of course, is essentially a total unknown in the U.S., even though it has a sizable global presence elsewhere. After a week with its latest flagship, which features Qualcomm’s latest Snapdragon 855 chip and under-screen fingerprint scanner, I’ve come away impressed by what the company delivers, especially given the price point. In the U.S. market, the $399 Pixel 3a may seem like a good deal, but that’s because a lot of brands like Meizu, Xiaomi, Huawei and others have been shut out.

It’s odd that this is now a differentiating feature, but the first thing you’ll notice when you get started is the notchless screen. The dual-SIM 16s must have one of the smallest selfie cameras currently on the market, and the actual bezels, especially when compared to something like the Pixel 3a, are minimal. That trade-off works for me. I’ll take a tiny bezel over a notch any day. The 6.2-inch AMOLED screen, which is protected by Gorilla Glass, is crisp and bright, though maybe a bit more saturated than necessary.

The in-display fingerprint reader works just fine, though it’s a bit more finicky than the dedicated readers I’ve used in the past.

With its 855 chip and 6GB of RAM, it’s no surprise the phone feels snappy. To be honest, that’s true for every phone, though, even in the mid-range. Unless you are a gamer, it’s really hard to push any modern phone to its limits. The real test is how this speed holds up over time, and that’s not something we can judge right now.

The overall build quality is excellent, and while the plastic back is very pretty, it’s also a) weird to see a plastic back to begin with and b) slippery enough to just glide over your desk and drop on the floor if it’s at even a slight angle.

Meizu’s Flyme skin does the job, and adds some useful features like a built-in screen recorder. I’m partial to Google’s Pixel launcher, and Flyme feels a bit limited in comparison to that and other third-party launchers. There is no app drawer, for example, so all of your apps have to live on the home screen. Personally, I went to the Microsoft Launcher pretty quickly, since that’s closer to the ecosystem I live in anyway. Being able to do that is one of the advantages of Android, after all.

Meizu also offers a number of proprietary gesture controls that replace the standard Android buttons. These may or may not work for you, depending on how you feel about gesture-based interfaces.

I haven’t done any formal battery tests, but the battery easily lasted me through a day of regular usage.

These days, though, phones are really about the cameras. Meizu opted for Sony’s latest 48-megapixel sensor here for its main camera and a 20-megapixel sensor for its telephoto lens that provides up to 3x optical zoom. The camera features optical image stabilization, which, when combined with the software stabilization, makes it easier to take low-light pictures and record shake-free video (though 4K video does not feature Meizu’s anti-shake system).

While you can set the camera to actually produce a 48-megapixel image, the standard setting combines four pixels’ worth of light into a single pixel. That makes for a better image, though you do have the option to go for the full 48 megapixels if you really want to. The camera’s daytime performance is very good, though maybe not quite up to par with some other flagship phones. It really shines when the light dims, though. At night, the camera is highly competitive and Meizu knows that, so the company even added two distinct night modes: one for handheld shooting and one for when you set the phone down or use a tripod. There is also a pro mode with manual controls.
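
For the curious, that four-into-one combination is pixel binning, and on the raw sensor array it’s a short operation. A minimal NumPy illustration, ignoring the sensor’s color filter layout and using stand-in data:

```python
import numpy as np

# Pixel binning: average each 2x2 block of the 48 MP readout into one pixel
# of a 12 MP image, trading resolution for light sensitivity. This ignores
# the color filter array; the input is a stand-in for raw sensor data.

sensor = np.random.rand(8000, 6000)      # ~48 million raw pixel values
binned = sensor.reshape(4000, 2, 3000, 2).mean(axis=(1, 3))  # ~12 MP output
```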

Otherwise, the camera app provides all the usual portrait mode features you’d expect today. The 2x zoom works great, but at 3x, everything starts feeling a bit artificial and slightly washed out. It’ll do in a pinch, but you’re better off getting closer to your subject.

In looking at these features, it’s worth remembering the phone’s price. You’re not making a lot of trade-offs at less than $500, and it’d be nice to see more phones of this caliber on sale in the U.S. Right now, it looks like the OnePlus 7 Pro at $669 is your best bet if you are in the U.S. and looking for a flagship phone without the flagship price.

Samsung’s 5G phone hits Verizon, Sprint getting two 5G devices this month

Posted in hardware, HTC, LG, Mobile, Samsung, smartphones, sprint, TC, Verizon

With 5G, when it rains, it pours. A few hours after Verizon officially started selling the Samsung Galaxy S10 5G, Sprint announced that it will be offering two 5G devices for its network by the end of the month.

For now, it still feels like manufacturers are putting the cart before the horse here. There’s little question that 5G will become ubiquitous in the next few years, but actual opportunities to access the technology are still pretty scarce.

Among U.S. carriers, Verizon (our parent company’s parent company) has been the most aggressive. Fitting then, that the company is first to market with the Galaxy S10 5G. Of course, all of these devices will default to 4G when there’s no 5G to be found, which is going to be the case more often than not for a while.

Verizon’s 5G is currently available in select markets, including Chicago and Minneapolis. That number is set to balloon to 20 locales before year’s end, including Atlanta, Boston, Charlotte, Cincinnati, Cleveland, Columbus, Dallas, Des Moines, Denver, Detroit, Houston, Indianapolis, Kansas City, Little Rock, Memphis, Phoenix, Providence, San Diego, Salt Lake City and Washington, DC.

Sprint, meanwhile, has promised to flip on 5G in nine markets “in the coming weeks.” The list includes parts of Atlanta, Dallas, Houston and Kansas City, and then locations in Los Angeles, New York City, Phoenix and Washington, D.C.

To celebrate, the network will be offering two 5G devices this month. The LG V50 ThinQ and HTC 5G Hub will hit Sprint stores on May 31.

SpaceX kicks off its space-based internet service tomorrow with 60-satellite Starlink launch

Posted in Gadgets, hardware, Space, SpaceX

As wild as it sounds, the race is on to build a functioning space internet — and SpaceX is taking its biggest step yet with the launch of 60 (!) satellites tomorrow that will form the first wave of its Starlink constellation. It’s a hugely important and incredibly complex launch for the company — and should be well worth watching.

A Falcon 9 loaded to the gills with the flat Starlink test satellites (they’re “production design” but not final hardware) is vertical at launchpad 40 in Cape Canaveral. It has completed its static fire test and should have a window for launch tomorrow, weather permitting.

Building satellite constellations hundreds or thousands strong is seen by several major companies and investors as the next major phase of connectivity — though it will take years and billions of dollars to do so.

OneWeb, perhaps SpaceX’s biggest competitor in this area, just secured $1.25 billion in funding after launching the first six satellites in March (of a planned 650). Jeff Bezos has announced that Amazon will join the fray with the proposed 3,236-satellite Project Kuiper. Ubiquitilink has a totally different approach. And plenty of others are taking on smaller segments, like lower-cost or domain-specific networks.

Needless to say it’s an exciting sector, but tomorrow’s launch is a particularly interesting one because it is so consequential for SpaceX. If this doesn’t go well, it could set Starlink’s plans back long enough to give competitors an edge.

The satellites stacked inside the Falcon 9 payload fairing. “Tight fit,” pointed out CEO Elon Musk.

SpaceX hasn’t explained exactly how the 60 satellites will be distributed to their respective orbits, but founder and CEO Elon Musk did note on Twitter that there’s “no dispenser.” Of course there must be some kind of dispenser — these things aren’t going to just jump off of their own accord. They’re stuffed in there like kernels on a corncob, and likely each have a little spring that sends them out at a set velocity.

A pair of prototype satellites, Tintin-A and B, have been in orbit since early last year, and have no doubt furnished a great deal of useful information to the Starlink program. But the 60 aboard tomorrow’s launch aren’t quite final hardware. Although Musk noted that they are “production design,” COO Gwynne Shotwell has said that they are still test models.

“This next batch of satellites will really be a demonstration set for us to see the deployment scheme and start putting our network together,” she said at the Satellite 2019 conference in Washington, D.C. — they reportedly lack inter-satellite links but are otherwise functional. I’ve asked SpaceX for more information on this.

It makes sense: If you’re planning to put thousands (perhaps as many as 12,000 eventually) of satellites into orbit, you’ll need to test at scale and with production hardware.

And for those worried about the possibility of overpopulation in orbit — it’s absolutely something to consider, but many of these satellites will be flying at extremely low altitudes; at 550 kilometers up, these tiny satellites will naturally de-orbit in a handful of years. Even OneWeb’s, at 1,100 km, aren’t that high up — geosynchronous satellites are above 35,000 km. That doesn’t mean there’s no risk at all, but it does mean failed or abandoned satellites won’t stick around for long.

Just don’t expect to boot up your Starlink connection any time soon. It would take a minimum of six more launches like this one — a total of 420, a happy coincidence for Musk — to provide “minor” coverage. This would likely only be for testing as well, not commercial service. That would need 12 more launches, and dozens more to bring it to the point where it can compete with terrestrial broadband.

Even if it will take years to pull off, that is the plan. And by that time others will have spun up their operations as well. It’s an exciting time for space and for connectivity.

No launch time has been set as of this writing, so takeoff is just planned for Wednesday the 15th at present. As there’s no need to synchronize the launch with the movement of any particular celestial body, T-0 should be fairly flexible and SpaceX will likely just wait for the best weather and visibility. Delays are always a possibility, though, so don’t be surprised if this is pushed out to later in the week.

As always you’ll be able to watch the launch at the SpaceX website, but I’ll update this post with the live video link as soon as it’s available.
