Technology

Robots and Cobots May Change Industry Forever

How would you like to have a robot assistant in your home or at work? It might seem like a sci-fi fantasy, but it may be closer to reality than you think. Boston Dynamics plans to release its first commercial robot by the end of this year, and a number of companies are already working to make collaborative robots, or cobots, safer and more interactive with humans. Researchers from MIT have developed an algorithm that makes these robots more efficient without compromising safety, which has been a major concern on factory floors. And biomedical researchers from the University of Houston made a major breakthrough in brain-computer interfaces that lets prosthetics more accurately anticipate human motion. But before we go further, make sure you’re following us over at Minds.com/subverse where we’ll be posting additional and exclusive content going forward.

Boston Dynamics’ Spot is a four-legged electric robot that can pick up and carry objects with an extendable arm positioned on its so-called “head.” It has perception sensors, stereo and depth cameras, position and force sensors, and an inertial measurement unit, an electronic device that tracks Spot’s acceleration and orientation. The dog-like robot was shown off at the Re:MARS conference in Las Vegas, where robotics, machine learning, and space exploration come together in one place. At the conference, two Spot bots interacted with attendees, controlled by two Boston Dynamics employees using modified gaming tablets. Spot can keep itself balanced on uneven terrain and even withstand kicks and shoves, but it can’t yet decide for itself where to walk. Once an area is mapped out, however, Spot can navigate its environment autonomously, though it still needs human handlers.

At the conference, the handlers showed the crowd how simple it is to operate Spot. The robot is steered with a simple directional pad, and the control pad shows the operator a real-time video feed from its front-facing cameras. Another control lets the handler operate the robotic arm mounted on its head. As with any new technology, there are still issues to be worked out. During the demonstration, Spot’s legs suddenly folded up and it collapsed, taking a nose dive to the floor. Boston Dynamics CEO Marc Raibert said they’re still testing Spot in a number of work environments, like package delivery and surveying. Its three-dimensional cameras give it the ability to map construction sites, identify hazards, and track progress. But Spot has a wide range of custom uses and can undertake a variety of tasks. Its arm attachment gives it the ability to manipulate objects and even open doors. At the conference, Spot picked up a toy and offered it to a police dog.

The robots currently used in factories and warehouses only perform tasks that are programmed into them, usually around the time of their creation. Robots like Spot can work beside humans in changing environments, reacting to dynamic conditions and other hazards. Raibert showed an example of this on stage when he presented a video of Spot overcoming man-made obstacles while trying to open a door. There are, of course, questions about the actual advantage of employing an expensive robot over a human who can perform the same tasks. Boston Dynamics began developing robots like Spot for the US military over ten years ago. This led to companies in other countries developing their own four-legged robots, such as China’s Unitree and Switzerland’s ANYbotics. Boston Dynamics now has to figure out how to scale production of these robots to meet demand. Raibert said they’re aiming to manufacture one thousand per year, but didn’t say how much they would charge per robot, though their commercial version will be much less expensive than their competitors’ bots and their own prototypes. He did mention they already have paying customers, including Japanese construction companies that are testing Spot as a progress overseer on worksites.

According to The Verge, Raibert said: “There’s a remarkable number of construction companies we’re talking to, but we have some other applications that are very promising, [including] in hostile environments where the cost of having people there is high.”
There are plenty of uses for Spot that would keep human workers out of unsafe conditions, like disaster zones.
Robots have been working alongside humans for years on factory floors and in warehouses. According to the International Federation of Robotics, there was an average of one hundred eighty-nine robots for every ten thousand workers in the US. Industrial robots that lift and move huge pieces of material for manufacturing are often bolted to the floor behind fences, away from their organic-bodied coworkers, to keep those workers safe. But the next generation of robots are cobots, collaborative robots that are increasingly mobile and interactive, which opens up the possibility of unintended injuries to the humans they work with.

There are a number of engineers and companies working on new technologies to minimize the risk to humans. Massachusetts-based Veo Robotics introduced Veo FreeMove on Monday, giving robots spatial awareness of obstacles and objects within reach. The company partnered with four of the largest robot manufacturers in the world to add three-dimensional depth sensors and computer vision software to their robots, giving them a monitoring system that signals the robot to stop or slow down if a human-sized object is too close. After the obstruction passes, the robot proceeds with its task. They conducted trials with manufacturers of vehicles, appliances, and packaged goods, using Xbox Kinect cameras while they build their own sensors for future production.
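To make that monitoring behavior concrete, here is a minimal sketch of a speed-and-separation loop of the kind described above. This is not Veo’s actual software; the distance thresholds, sensor interface, and function names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical safety thresholds in meters. Real systems derive these from the
# robot's speed and stopping distance (see standards such as ISO/TS 15066).
STOP_DISTANCE = 1.0
SLOW_DISTANCE = 2.5

def nearest_obstruction(depth_points, robot_position):
    """Distance from the robot to the closest point in the fused 3D depth data.

    `depth_points` is an (N, 3) array from the workcell's depth sensors;
    filtering out objects smaller than a person is omitted for brevity.
    """
    if len(depth_points) == 0:
        return np.inf
    return np.min(np.linalg.norm(depth_points - robot_position, axis=1))

def choose_speed(depth_points, robot_position, normal_speed=1.0):
    """Return a speed command: stop, creep, or run at normal speed."""
    distance = nearest_obstruction(depth_points, robot_position)
    if distance < STOP_DISTANCE:
        return 0.0                     # a person is too close: stop
    if distance < SLOW_DISTANCE:
        return normal_speed * 0.3      # someone is nearby: slow down
    return normal_speed                # workspace clear: full speed
```

Because the check runs continuously, the same loop returns the normal speed as soon as the obstruction moves on, which is why the robot can resume its task without operator intervention.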

Veo CEO and co-founder Patrick Sobalvarro told CNBC, “The collaborative power and force-limited robots have been very useful for assembly of small things. What we would like to do is extend those advantages to all robots, regardless of the size. Whether it’s a robot that can carry a car or a robot that can carry a car door or a robot that moves fast and positions things very precisely.”

Cobots are getting lighter, more versatile, and more interactive with humans, opening them up to industries outside manufacturing and into fields like food services, law enforcement, and health care. They’re not meant to replace human workers but to act more like assistants, although they can be designed to operate with limited guidance or autonomously. Demand for these bots has been growing, and companies are doing their best to reduce errors and increase the quality and speed of production. Sobalvarro explained, “What we hear from every factory, every line manager is that they can’t hire enough production workers. The production labor workforce is aging out, and one of the things we see as an advantage of our system is that physical strength will no longer be required for production workers. This company is predicated on the belief that production labor continues to be tremendously important in manufacturing.”

Although these robots are fitted with safety measures that keep humans safe, that protection often comes at the expense of productivity. MIT researchers working with BMW noticed that robots were overly cautious when operating around humans and would waste a lot of time just waiting for workers to pass. So they created a new algorithm that could increase efficiency while still prioritizing safety. The algorithm improves the robots’ ability to predict the trajectory of a human worker as they move, allowing it to work around the workers’ foot traffic instead of freezing up.

According to EurekAlert!, Julie Shah, associate professor of aeronautics and astronautics at MIT, said, “This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which is a core part of human motion. This technique is one of the many ways we’re working on robots better understanding people.” Researchers have used algorithms from music and speech processing to align sets of related data, but human motion is more variable than music or speech. Even in repetitive processes, humans make slight variations in their movement each time. Other algorithms record streaming motion data as dots that represent a person’s position over a time span, comparing those dots to a library of common trajectories. These algorithms can be easily confused in some pretty common scenarios, like a person pausing momentarily while moving, because the dots bunch up in one spot. To find a workaround, Shah and her colleague Pem Lasota created a partial trajectory algorithm that aligns segments of the observed trajectory with a library of previous reference trajectories in real time. This gives a robot the ability to accurately anticipate overlaps and stops in a human’s path.

“Say you’ve executed this much of a motion,” Lasota explains. “Old techniques will say, ‘this is the closest point on this representative trajectory for that motion.’ But since you only completed this much of it in a short amount of time, the timing part of the algorithm will say, ‘based on the timing, it’s unlikely that you’re already on your way back, because you just started your motion.’”

The research team found that their algorithm was better at estimating a person’s progress through a trajectory than commonly used trajectory alignment algorithms. With their algorithm, the robot was less prone to stopping, instead resuming its task right after a human crossed its path. The technique can also be used as a preprocessing step for other human-robot interaction tasks, like action recognition and gesture detection. Shah says this algorithm will be key in enabling robots to respond to human movement and behavior patterns, not just in factories, but in homes as well. Shah said, “This technique could apply to any environment where humans exhibit typical patterns of behavior. The key is that the [robotic] system can observe patterns that occur over and over so that it can learn something about human behavior. This is all in the vein of work of the robot better understanding aspects of human motion, to be able to collaborate with us better.”
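For readers curious what aligning a partial trajectory against a library of reference trajectories might look like, here is a rough Python sketch. It uses a simple distance-plus-timing score and is only an illustration of the general idea, not the MIT team’s actual formulation; the function names and the timing weight are assumptions.

```python
import numpy as np

def score_alignment(partial, elapsed, reference, ref_duration, w_time=1.0):
    """Score how well an observed partial trajectory matches a prefix of a reference.

    partial      : (N, 2) positions observed so far
    elapsed      : seconds since the motion started
    reference    : (M, 2) positions from a previously recorded trajectory
    ref_duration : total duration of the reference trajectory in seconds
    Returns (score, estimated_progress); lower scores are better.
    """
    best = (np.inf, 0.0)
    for m in range(len(partial), len(reference) + 1):
        prefix = reference[:m]
        # Spatial term: resample the prefix to the partial's length and compare.
        idx = np.linspace(0, m - 1, len(partial)).astype(int)
        spatial = np.mean(np.linalg.norm(partial - prefix[idx], axis=1))
        # Timing term: penalize progress inconsistent with the time elapsed,
        # e.g. "you just started, so you are unlikely to be on the way back."
        progress = m / len(reference)
        expected = min(elapsed / ref_duration, 1.0)
        score = spatial + w_time * abs(progress - expected)
        if score < best[0]:
            best = (score, progress)
    return best

def predict_progress(partial, elapsed, library):
    """Choose the reference trajectory that best explains the motion so far.
    `library` is a list of (trajectory, duration) pairs."""
    return min((score_alignment(partial, elapsed, ref, dur) for ref, dur in library),
               key=lambda result: result[0])
```

With an estimate of how far along a motion the person is, and whether they have paused, a planner can route the robot around the remainder of the path instead of freezing until the whole workspace is empty.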

A team of biomedical engineering professors from the University of Houston published a study in eNeuro last week showing that a brain-computer interface can actually sense when its user is anticipating a reward by tracking interactions between single-neuron activity and the information flowing through the surrounding neurons, called the ‘local field potential.’ The findings allow the development of a brain-computer interface that autonomously updates, self-improves, and learns about its subject without reprogramming. This has exciting implications for robotic prosthetics operating more naturally, allowing a prosthetic to sense what its user wants to do, like pick up an object. According to the University of Houston’s news release, Professor of Biomedical Engineering Joe Francis claims, “This will help prosthetics work the way the user wants them to. The brain-computer interface quickly interprets what you’re going to do and what you expect as far as whether the outcome will be good or bad.” He added that this improves scientists’ ability to predict reward outcomes from the mid-seventy percent range to ninety-seven percent. Francis used implanted electrodes to examine spikes in brain activity during tasks to understand how the interactions are regulated by reward expectations. Even when a task called for no movement, just passive observation, the brain-computer interface was able to determine intention based on a neural activity pattern that resembled movement.

Francis explained, “We assume the intention is in there, and we decode that information by an algorithm and have it control either a computer cursor, for example, or a robotic arm. … This is important because we are going to have to extract this information and brain activity out of people who cannot actually move, so this is our way of showing we can still get the information even if there is no movement. This examination of reward motivation in the primary motor cortex could be useful in developing an autonomously updating brain-machine interface.”

The potential for robots and automation is both exciting and concerning.

Hackers Release Border Control Data on Dark Web

There’s a reason that we focus on hackers, facial recognition, and data privacy at Subverse. We pay attention to developments in emerging technologies because of the massive impact they have on society and individuals. We’re beginning to see some of the consequences of compiling massive amounts of private citizens’ data unfold in real time, as hackers target these databases for valuable information that can be used to steal identities and generally throw a wrench into the systems of governments at the local and national levels. A federal contractor’s database was recently breached, and the information was available as a free download on the dark web. Microsoft recently and quietly purged their facial recognition database, but as we all know, once something is online, it’s very difficult, if not impossible, to actually scrub it from the record. And in Detroit, lawmakers and citizens are finally getting an opportunity to address the widespread use of facial recognition cameras operated in real time by law enforcement.

On Monday, US Customs officials released a statement explaining that one of their subcontractors’ databases had been breached in a cyber attack at the end of May. Photos of people entering and leaving one US port of entry over a month and a half were compromised, with initial reports claiming that fewer than one hundred thousand people were impacted. Federal officials also claimed that travel document, passport, and identification information was not compromised, and that none of the information had been disseminated on the dark web. However, The Register reported that a large cache of breached data from subcontractor firm Perceptics was offered as a free download on the dark web.

According to The Register, an individual using the pseudonym “Boris Bullet-Dodger” alerted them to the hack and provided a list of files exfiltrated from Perceptics’ corporate network as proof. Perceptics makes license plate readers used by the US government and cities to identify and track citizens and immigrants, and it is the sole provider of license plate readers used at land border crossing lanes in the US. The Register also noted that Perceptics recently announced, in a press release that is no longer on its site, that it had landed “a key contract by US Customs and Border Protection to replace existing LPR technology, and to install Perceptics next generation License Plate Readers (LPRs) at 43 US Border Patrol checkpoint lanes in Texas, New Mexico, Arizona, and California.”

Almost sixty-five thousand file names fit their scope of surveillance technology, including spreadsheets with locations and zip codes, image files with names of individuals, and documents associated with federal clients like ICE. Hundreds of gigabytes of data, including business plans, financial figures, and personal information, were found on the dark web. At first, Customs and Border Protection wouldn’t say which subcontractor was involved, but the public statement it sent to Washington Post reporters on Monday included the name Perceptics in the title: “CBP Perceptics Public Statement.” CBP said that the copies of the license plate and traveler images collected by CBP were transferred to the subcontractor’s network in violation of the federal agency’s security and privacy rules.

CBP says no CBP systems were compromised, and spokesperson Jackie Wren was not able to confirm whether Perceptics was the source of the breach. A US official who spoke to the Washington Post anonymously said that internally it was being described as a ‘major incident.’ The official said that Perceptics was trying to use the data to refine its algorithms to match license plates with the faces of a vehicle’s occupants, outside of CBP’s sanctioned use. Civil rights and privacy advocates say the theft shows that these government databases are going to be major targets for cybercriminals and hackers across the world.

ACLU senior legislative counsel Neema Singh Guliani said, “This breach comes just as CBP seeks to expand its massive face recognition apparatus and collection of sensitive information from travelers, including license plate information and social media identifiers. This incident further underscores the need to put the brakes on these efforts and for Congress to investigate the agency’s data practices. The best way to avoid breaches of sensitive personal data is not to collect and retain it in the first place.”

The incident also stirred panic in Congress, where lawmakers have been questioning whether federal surveillance measures are threatening individuals’ constitutional rights and risking identity theft of millions of citizens. In a statement to the Washington Post, Senator Ron Wyden said, “If the government collects sensitive information about Americans, it is responsible for protecting it — and that’s just as true if it contracts with a private company. Anyone whose information was compromised should be notified by Customs, and the government needs to explain exactly how it intends to prevent this kind of breach from happening in the future.”

Microsoft’s president, Brad Smith, appealed to Congress last year to take steps to manage technologies that have what he called “broad societal ramifications for potential abuse,” stating: “The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This, in fact, is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

As we reported before, Microsoft blocked sales of facial recognition technology to law enforcement in California, and last week Microsoft deleted its database containing over ten million images of around one hundred thousand people. The database was the largest publicly available facial recognition data set in the world, used to train facial recognition systems by military researchers and other global tech firms. The people whose photos were pulled were never asked for consent because their photos were online under a Creative Commons license. Because of this, the individuals were considered ‘celebrities,’ and the database was called MS Celeb. Berlin-based researcher Adam Harvey, who revealed the database, runs a project called Megapixels that documents the details of these data sets. Harvey told the Financial Times, “Microsoft has exploited the term ‘celebrity’ to include people who merely work online and have a digital identity. Many people in the target list are even vocal critics of the very technology Microsoft is using their name and biometric information to build.”

He said that even with the data set deleted, the contents are still being disseminated around the web. “You can’t make a data set disappear. Once you post it, and people download it, it exists on hard drives all over the world. Now it is completely disassociated from any licensing, rules or controls that Microsoft previously had over it. People are posting it on GitHub, hosting the files on Dropbox and Baidu Cloud, so there is no way of stopping them from continuing to post it and use it for their own purposes.” The Financial Times published an in-depth investigation in April on the technology and Microsoft’s role. Microsoft explained to them that the recent deletion was protocol, that the site was intended for academic purposes, and that it was run by an employee who is no longer with Microsoft.

Amazon is following Microsoft in acknowledging the risks of its facial recognition services and is also calling for the federal government to put national regulations in place for the technology. The CEO of Amazon Web Services, Andy Jassy, told Kara Swisher at Vox’s Code 2019 conference: “Whether it’s private-sector companies or our police forces, you have to be accountable for your actions and you have to be held responsible if you misuse it.” Amazon Rekognition uses AI to identify faces in videos and photos. Eighty-five civil liberties groups have criticized Amazon for selling the system to governments and law enforcement, and studies have shown that Rekognition has higher rates of misidentification for women with darker skin than for men with lighter skin.

Although law enforcement agencies in Florida and Oregon are already using facial recognition, Detroit and Chicago are the first cities in the US to use facial recognition technology capable of working in real time, similar to the systems the FBI and federal border agencies use at many US ports of entry. Detroit’s million-dollar face-scanning system allows law enforcement to identify and track residents using private and public high-definition cameras at strategic locations. From these images, Detroit law enforcement can identify anyone at any time using a database of hundreds of thousands of mug shots, driver’s licenses, and photos taken from social media.

Detroit Police slipped this system into place without any public hearings or announcements, and integrated the technology with their Project Green Light, an initiative started in 2016 using surveillance cameras at late-night gathering spots like gas stations and fast-food restaurants. Since then, they’ve expanded the system to apartments, lower-income housing, churches, parks, schools, hotels, and health clinics, including addiction treatment centers. Altogether, there are over five hundred Green Light locations. Detroit police defended their use of the technology, claiming it’s only used to track down suspects after a crime is committed.

Detroit Police Department’s Assistant Chief Dave LeValley told the Detroit Metro Times, “DPD does not use live streaming videos with facial recognition software. Videos fed into the Real Time Crime Center are used only to obtain still images of an individual suspected in a criminal offense for purposes of identifying the suspect. Those still images are used to search known databases or repositories of criminal mugshots, state driver’s license photographs, and state identification card photographs. Any images taken during a First Amendment-protected public event, activity, or affiliation would only be used under exigent circumstances that would require the express approval of the Chief or his designee and a report to the Board of Police Commissioners after such use.”

Local elected officials haven’t been very vocal about the initiative until now. The Detroit Police Commission is hosting a public hearing tomorrow to discuss the surveillance system, and privacy advocates are distributing flyers to inform the community about what’s going on. Michigan Representative Rashida Tlaib was outspoken against facial recognition technology during public hearings in late May and early June. She believes Detroit should be more open about its public use of these systems because of the potential for abuse and the number of misidentifications. Tlaib told the Detroit Metro Times, “I’ve heard from local groups, community members, and civil rights advocates: facial recognition technology is a flawed system riddled with privacy and constitutional concerns. I support a moratorium on its use in law enforcement. The lack of public input and transparency is alarming. The use of facial recognition with little to no real oversight or research endangers all our lives directly.” There has been a major push by local and national lawmakers on both sides of the aisle to halt the use of facial recognition until a legal framework can be worked out that protects people’s privacy and prevents misidentification.

As always, we will continue to keep an eye on these subjects and give you updates. Stay tuned for more videos Monday through Thursday at 7 pm eastern and be sure to follow us at minds.com/subverse where you can join our online newsroom to discuss our coverage and point out stories to us that you think need more coverage. Thanks for watching, and we’ll see you next time.

USA and Japan Plan For 2024 Moon Base

Getting back into space has been on a lot of people’s minds lately, especially those of world leaders. At a joint press conference in Tokyo earlier this week, Prime Minister Shinzo Abe and President Donald Trump agreed to further cooperate in space exploration, which might include sending Japanese astronauts to the moon.

During the press conference, Trump said, “I am pleased to confirm that Prime Minister Abe and I have agreed to dramatically expand our nations’ cooperation in human space exploration. Japan will join our mission to send U.S. astronauts to space. We’ll be going to the moon. We’ll be going to Mars very soon. It’s very exciting.”

A fact sheet released by the State Department noted that the two leaders “agreed on the importance of a sustained human presence on and around the Moon. Building on its International Space Station experience, Japanese astronauts will strive to join American astronauts on the Moon and destinations beyond.”

NASA is accelerating its plan to put people on the moon’s surface, moving the goal from 2028 to 2024. The major roles for NASA’s international partners will largely be deferred to the second phase, which focuses on establishing a sustainable human presence after the 2024 landing. The partners’ contributions would include Gateway modules, which give countries slots on lunar landing missions similar to the way ISS partners get crew slots on the space station.

Ken Bowersox, the deputy associate administrator for human exploration and operations at NASA, said during an advisory council committee meeting, “Accelerating the landing date to 2024 makes it harder for us to incorporate our international partners early. We’re still looking at working with our international partners. A lot of their elements were going to come after 2024 anyway.” Bowersox added that if international partners accelerate their contributions, they’re welcome to participate in the early phases.

One of the Japanese companies looking forward to this further cooperation with the US is iSpace, which develops commercial lunar landers as part of a team led by an American company called Draper. Draper won one of nine commercial lunar payload service agreements from NASA last year to transport payloads to the surface of the moon. In a statement to SpaceNews, iSpace founder Takeshi Hakamada said, “We are thrilled to learn that the U.S. and Japan will deepen its strong relationship in space exploration through a focused effort on lunar exploration. Alongside our American partner, Draper, iSpace is well prepared and eager to support this new endeavor between the U.S. and Japan.”

Though the accelerated plan seems ambitious, much of the equipment and hardware needed for the project is already in development or will be soon. Lucky for NASA, there are a number of companies with ideas for lunar landers. Earlier this month, Jeff Bezos’ Blue Origin showed off the current iteration of its Blue Moon lunar lander. Blue Moon can carry three point six metric tons to the moon using a new rocket engine Blue Origin is developing that’s powered by liquid hydrogen and liquid oxygen. In April, Lockheed Martin showed its concepts for lunar landers, which differed from its original lander idea. The original design was a giant single-stage reusable lander that could operate for two weeks on the moon’s surface and carry four people. Moving the deadline to 2024 forced the company to table that ambitious design for a smaller two-stage lander that could be built more quickly. The deadline shift also changed the various companies’ roles in the development of the landers. Under the earlier plan, different companies would develop the lander’s three separate elements, while NASA would oversee the overall architecture and the integration of the components. The new deadline puts less control in the hands of NASA and more responsibility on the companies, giving them flexibility to pursue alternative approaches rather than NASA’s original three-stage concept. This changes things, but it will be interesting to watch how the companies rise to the challenge. We plan on continuing our coverage of space, from the satellites orbiting our planet to the exploration of our solar system and beyond, so stay tuned for more videos.

Outrage Over Massive Satellite Expansion

When mankind first stared up at the stars, we made up stories about bears, archers, crabs, and gods. Now when people look up at the stars, they see more possibilities. Some think “we ought to have some kind of planetary defense system up there.” Other people think “can satellites make the internet better?” And some wonder, “why don’t we have a moon base yet?” Well, we’re making progress on these things.
A few weeks ago we reported on the first wave of Starlink satellites launched into orbit. Starlink is SpaceX’s venture to bring low-latency broadband internet to unserved or underserved areas around the globe. SpaceX overcame plenty of obstacles leading up to that launch, but even now, with the first sixty satellites in orbit, Starlink is facing criticism over the impact of cluttering the sky with false stars.

Earlier this week, SpaceX had a different kind of launch — a new website for Starlink, which details what it is and how it works. The site says that after just six launches, Starlink should be able to provide coverage to the Northern US and Canada. By the twenty-fourth launch, Starlink will have about fifteen hundred satellites in orbit to provide internet to the populated world. The end goal is around twelve thousand satellites circling the Earth. By the end of this year alone, SpaceX is aiming to have up to six launches completed. With a plan to launch sixty satellites every six to eight weeks, even if they can’t hit six launches by the end of 2019, the amount of coverage Starlink would provide in 2020 puts them way ahead of most competitors.
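Those coverage milestones follow from straightforward arithmetic on SpaceX’s roughly sixty-satellites-per-launch figure; here is a quick back-of-the-envelope check (the launch counts come from SpaceX’s statements above, the multiplication is ours):

```python
SATS_PER_LAUNCH = 60  # per SpaceX's Starlink launches to date

launches_for_us_canada = 6
launches_for_populated_world = 24
full_constellation = 12_000

print(launches_for_us_canada * SATS_PER_LAUNCH)        # 360 satellites for northern US and Canada
print(launches_for_populated_world * SATS_PER_LAUNCH)  # 1,440 -- roughly the "fifteen hundred" figure
print(full_constellation // SATS_PER_LAUNCH)           # 200 launches to reach the full constellation
```

Even at sixty satellites every six to eight weeks, that last number makes clear the full constellation is a multi-year project.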

As we’ve mentioned before, OneWeb is currently the closest competitor to Starlink, while Amazon’s Project Kuiper, Telesat, and LeoSat are two to five years away from launching their satellites. When their internet services will be available is not yet clear, but it’s going to take a while. At that point, the sky is going to be FILLED with false stars. Astronomers already have beef with Starlink, raising questions over the ethics of a single company, let alone a handful of companies, changing the appearance of the sky.

The night after the Starlink launch, amateur astronomer Marco Langbroek captured footage from the Netherlands of the train of satellites moving into orbit around the Earth. According to Forbes, Langbroek said, “What I had not anticipated was how bright the objects were and how spectacular a view it would be. It really was an incredible and bizarre view to see that whole train of objects in a line moving across the sky.”

Astronomers have raised questions about how these constellations of satellites will affect ground-based astronomy and add congestion to the orbital environment. A portion of the satellites will operate at or close to the frequencies radio astronomers use to study the cosmos. According to National Geographic, Lise van Zee, chair of the National Academy of Sciences’ Committee on Radio Frequencies, or CORF, said: “As a general principle, radio astronomy facilities are particularly vulnerable to satellite downlinks and to airborne uses, as radio telescopes cannot be protected from high-altitude transmissions through geographical shielding alone.” Van Zee says that a coordination agreement regarding Starlink is in the works to balance the interests of science and telecom companies. Although both SpaceX and OneWeb are working out these agreements with the National Science Foundation and the National Radio Astronomy Observatory, Harvey Liszt of the NRAO says they keep changing the parameters of their satellites without updating them.

Beyond radio astronomy, visual astronomers will have to deal with the satellites crossing through their research images. Musk has said before that these satellites will barely be noticeable, but a few days after the launch, he addressed the Starlink team about reducing their reflectivity, tweeting that he “sent a note to Starlink team last week specifically regarding albedo reduction. We’ll get a better sense of the value of this when satellites have raised orbits & arrays are tracking to sun.”

The actual impact of Starlink isn’t known yet, but astronomers are preparing for streaky skies. There are currently about five thousand satellites orbiting the planet, so astronomers have dealt with the occasional satellite before, but that number is about to increase dramatically over the next few years; Starlink alone is going to triple it. In the middle of the night, the satellites are likely not going to be visible, because they will be in Earth’s shadow with no sunlight to reflect, but before sunrise and after sunset, the thousands of satellites will be visible. When Starlink satellites flare their solar arrays at the right angle, the sunlight reflected toward Earth boosts their brightness close to the levels of Venus or Jupiter. Bruce Macintosh from Stanford University noted that one of the major telescope projects of the next ten years, the Large Synoptic Survey Telescope, will probably have anywhere between one and four Starlink satellites in each image taken within an hour or two of twilight.
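The reason the satellites cluster around twilight rather than the middle of the night is simple geometry: a satellite stays sunlit until it passes into Earth’s shadow, and the higher it orbits, the longer after sunset that takes. Here is a rough sketch under a cylindrical-shadow approximation (ignoring refraction and the penumbra), using Starlink’s roughly 550-kilometer operational altitude:

```python
import math

EARTH_RADIUS_KM = 6371.0

def shadow_entry_depression(altitude_km):
    """Solar depression angle (degrees below the horizon) at which a satellite
    directly overhead enters Earth's shadow. Cylindrical-shadow model: the
    satellite is still sunlit while (R + h) * cos(depression) >= R."""
    ratio = EARTH_RADIUS_KM / (EARTH_RADIUS_KM + altitude_km)
    return math.degrees(math.acos(ratio))

print(round(shadow_entry_depression(550), 1))  # ~23 degrees for Starlink's operational orbit

# The sun sinks on the order of fifteen degrees per hour, so an overhead
# satellite at that altitude can stay sunlit for well over an hour after
# sunset, which is why the streaks pile up in the hours around twilight.
```

That back-of-the-envelope figure is consistent with Macintosh’s estimate that images taken within an hour or two of twilight are the ones most likely to be affected.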

In response to the concerns about cluttering the night sky, Musk tweeted that the satellites will be in darkness when the stars are visible. He also pointed out, “there are already forty-nine hundred satellites in orbit, which people notice around zero percent of the time. Starlink won’t be seen by anyone unless looking very carefully & will have around zero impact on advancements in astronomy. We need to move telescopes to orbit anyway. Atmospheric attenuation is terrible.”

Some pointed out that helping the billions of people without internet access is worth the price of seeing the satellites twice a day. Musk agreed, tweeting “Potentially helping billions of economically disadvantaged people is the greater good. That said, we’ll make sure Starlink has no material effect on discoveries in astronomy. We care a great deal about science.” He also entertained the idea of sending Starlink telescopes into space as well.

Amazon Shareholders Reject Concern Over Facial Recognition, AI National Strategy Bill

Some Amazon shareholders are concerned about the company’s facial recognition software being sold to law enforcement, but not enough want to stop or investigate how it’s being used. A UK court says there’s no legal framework yet to rule on a facial recognition case brought against the police, so the first ruling will set legal precedent. A bipartisan bill introduced to the US Senate last week is trying to tackle the lack of legislation around Artificial Intelligence and hopes to keep the US at the forefront of this rapidly advancing technology.

AI is Now BETTER at Diagnosing than Doctors

There are now websites and apps that can take self-reported symptoms from patients and give accurate diagnoses. Researchers have developed deep-learning systems that are now comparable to, if not better than, human doctors at diagnosing certain diseases and conditions. What does this mean for the future of medical professions?

Hackers Have Taken Over Baltimore Infrastructure, US Cities Need Better Cybersecurity

More than 20 US cities have been victims of cyber attacks since the start of 2019. Municipal governments are lacking in cybersecurity and it’s costing taxpayers millions of dollars in new devices and network infrastructure. Some firms claim they have tools to get your data back, but do they really?

Self-driving Trucks Deployed by US Postal Service, Whose Jobs are at Risk?

Autonomous vehicles are coming, and some are already here. Today, the US Postal Service deployed autonomous trucks in the Southwest as part of a pilot program. This comes a week after electric autonomous trucks were deployed on a public industrial road in Sweden. What does this mean for the trucking industry?

The New Space Race is Here: Global Satellite Internet

Rural America is under-served or un-served by ISPs. The satellite internet space race between SpaceX’s Starlink network, OneWeb, and Amazon’s Project Kuiper will bring new competition to telecommunication companies.
