None of 19 domestic phones was spared: face recognition cracked in 15 minutes with a pair of printed glasses

Nice new phone — mind if I unlock it with my face?

Face recognition is now standard on smartphones. Unlocking and paying with our faces has become as natural as eating and drinking — so much so that during the epidemic, being unable to unlock a phone while wearing a mask felt genuinely uncomfortable.

While enjoying this convenience, few users pay much attention to security. Although manufacturers often claim at launch events that "the chance of face recognition being fooled is as low as one in a million," stories of twins unlocking each other's phones still make the news from time to time.

Recently, RealAI (Ruilai Wisdom), a startup incubated at Tsinghua University, demonstrated a far simpler attack: armed with nothing but a pair of glasses, the team quickly cracked every one of the 19 domestic Android phones it tested.

Specifically, the RealAI team selected 20 phones for the attack test, covering everything from budget devices to flagships at various price points.


The test subject wore a pair of glasses printed with an adversarial-example pattern, and making the glasses costs almost nothing: borrow a printer and add a sheet of A4 paper.

  

The adversarial glasses.

In the end, every phone except the lone iPhone 11 was successfully unlocked, and the entire cracking process took only 15 minutes. Once a phone was unlocked, the attacker could freely browse the owner's private data — WeChat, messages, photos — and could even pass the online identity checks of apps such as mobile banking to open new accounts.

The RealAI team said the test mainly exploited the "adversarial example" vulnerability of AI algorithms. Unlike earlier attack attempts, which were carried out mostly in laboratory settings, this test on real phones proved that the vulnerability genuinely exists in the wild.

RealAI says this is, to date, the only known case worldwide of using AI adversarial-example techniques to break the face unlock of commercial phones.

More worrying, the vulnerability affects every application and device equipped with face recognition. If exploited by hackers, both privacy and property are at risk.

Using an AI algorithm to camouflage a pair of "glasses"

The testing procedure was straightforward. RealAI selected 20 phones in total: one iPhone 11, plus 19 Android models from the top five domestic brands, with 3–4 models per brand at different price points, ranging from budget devices to flagships.

Before the test, the face of the same tester was enrolled on all 20 phones. A second tester, playing the "attacker," then put on the glasses carrying the adversarial pattern and tried to unlock each phone in turn. The result was startling: every phone was unlocked except the iPhone 11, which survived. Nor did difficulty vary much between models — all of them were unlocked within seconds.

The testers noted that while face recognition on low-end phones is generally assumed to be weaker, resistance to the attack showed no clear correlation with price. One flagship released as recently as December 2020 was, across repeated trials, essentially opened "on the first try."

The sudden success struck even the researchers as hard to believe: in hacking competitions, challenges against face recognition are typically marked by repeated attempts and failures. "The result was quite unexpected. We assumed we would need several more rounds of tuning, but it succeeded this easily," said a RealAI algorithm engineer.

So how does the new attack actually work?

Physically, RealAI's entire cracking process uses only three things: a printer, a sheet of A4 paper, and a pair of glasses frames.

The engineers explained that once they had a photo of the victim, an algorithm generated an interference pattern over the eye region. The pattern was printed, cut into a "glasses" shape, and attached to the frames; the tester then simply wore them to carry out the crack. The whole process takes about 15 minutes.

Left: the eye region of the attacked target. Middle and right: the generated adversarial patterns.

Although the pattern on the "glasses" may look like a copy of the target's eye region, it is not that simple. The engineers explained that it is a perturbation pattern computed by an algorithm from both the attacker's image and the victim's image — what the AI research community calls an "adversarial example."

With the attacker's image as the input and the victim's image as the optimization target, the algorithm automatically computes the adversarial pattern that pushes the similarity between the two faces, as seen by the recognition model, as high as possible.
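RealAI has not published its algorithm, but the optimization just described can be sketched with a toy stand-in for the face-embedding model — here a fixed random linear map, where a real attack would differentiate through a deep face-recognition network. The perturbation is confined to a mask standing in for the "eye region," and gradient ascent pushes the attacker's embedding toward the victim's. All sizes and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding model: a fixed random linear map
# followed by L2 normalisation. (A real attack would differentiate
# through a deep face-recognition network instead.)
W = rng.normal(size=(32, 64))

def embed(x):
    v = W @ x
    return v / np.linalg.norm(v)

attacker = rng.normal(size=64)   # stand-in for the attacker's face
victim = rng.normal(size=64)     # stand-in for the victim's photo
target = embed(victim)

# Only the "eye region" may be perturbed -- here, the first 16 of
# 64 input features, mimicking a pattern confined to the glasses.
mask = np.zeros(64)
mask[:16] = 1.0

delta = np.zeros(64)
lr = 0.1
before = float(embed(attacker) @ target)

for _ in range(2000):
    v = W @ (attacker + mask * delta)
    n = np.linalg.norm(v)
    # gradient of cosine similarity cos = (v . target) / ||v||  w.r.t. x
    grad = W.T @ (target / n - (v @ target) * v / n**3)
    delta += lr * mask * grad    # gradient ascent on the similarity

after = float(embed(attacker + mask * delta) @ target)
print(f"cosine similarity to victim: {before:.3f} -> {after:.3f}")
```

Even with the perturbation restricted to a quarter of the input, the similarity rises sharply — the same leverage that lets a printed paper pattern impersonate a whole face.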

The attack may look crude, but developing the core adversarial algorithm demands a real technical threshold.

That does not make the threat any less real, though. As the RealAI team put it: "Developing the core algorithm is very difficult, but if a hacker maliciously open-sources it, the barrier to entry drops sharply — the only work left is finding a photo." In other words, once someone obtains a photo of the target, almost anyone could quickly assemble a working cracking tool.

Adversarial example attacks: from the laboratory into the real world

The concept of the adversarial example attack is not new. In 2013, Google researcher Szegedy and colleagues found that machine learning models are easy to fool: by deliberately adding subtle perturbations to the input, a model can be made to misclassify. The finding has been a central concern of the AI security field ever since.

In the classic example, a network classifies an image as "panda" with 57.7% confidence — the highest score among all classes — so the network concludes the image contains a panda. Adding a small amount of carefully constructed noise produces an image (right) that looks virtually identical to the left one to a human, yet the network now classifies it as "gibbon" with 99.3% confidence.
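The panda/gibbon demonstration comes from the fast gradient sign method: nudge every input dimension by a tiny step against the sign of the gradient. For a linear classifier the effect can be computed exactly, which makes a small sketch possible. The toy classifier and all numbers below are illustrative, not the actual network from the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear classifier standing in for the panda/gibbon network:
# score > 0 -> "panda", score < 0 -> "gibbon".
w = rng.normal(size=1000)    # classifier weights
x = rng.normal(size=1000)    # the "image"
if w @ x < 0:                # make sure x starts out as "panda"
    x = -x

margin = w @ x               # positive: classified "panda"

# FGSM step: move each pixel by eps against the sign of the gradient.
# For a linear score w.x the margin drops by exactly eps * ||w||_1,
# so eps just above margin / ||w||_1 is guaranteed to flip the label.
eps = 1.5 * margin / np.sum(np.abs(w))

x_adv = x - eps * np.sign(w)

print(f"eps = {eps:.4f} (while pixels themselves are O(1))")
print("original score:", w @ x, "-> adversarial score:", w @ x_adv)
```

The per-pixel change eps is tiny compared with the pixels' own scale, yet because it is applied in the worst-case direction across thousands of dimensions, the cumulative effect on the score is large — the same reason the noise in the panda image is invisible to humans but decisive for the network.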

Information security is, at its core, a contest between attack and defense, and AI security is no different. Researchers probe the limits of adversarial example attacks by continually devising new ones.

In recent years, AI researchers have demonstrated attack after attack: making an image classifier recognize a 3D-printed turtle as a rifle, defeating object-detection systems so that a human body turns "invisible," and fooling object recognition so that autonomous-driving systems misread stop signs.

But technology develops in stages. Many attacks demonstrated in experimental environments prove unstable, and never make it out of the laboratory to pose an obvious real-world risk.

In August 2019, for instance, researchers from Moscow State University and Huawei's Moscow Research Center showed that sticking an adversarial pattern on the forehead could make a public face-identification system misidentify the wearer. Although this was hailed as the first AI adversarial attack realized in the physical world, its target was still a publicly available recognition model, far short of the security and complexity of a real commercial system.

RealAI's attack genuinely breaks out of this "hard to reproduce" pattern. It confirms, on the one hand, that adversarial example attacks pose a real threat, and on the other, that face recognition — a technology used by tens of millions of people — faces a new class of security challenge.

Face recognition has drawn controversy for years. Past security incidents include "a printed photo standing in for a real face," "videos fooling face authentication," and "3D-printed head models cracking phone face unlock."

RealAI's engineers point out, however, that the common attacks on the market are mainly "presentation attacks" — photos, videos, 3D head models, or masks. Because the recognition terminal is still capturing imagery of the owner's own face, the main hurdle for such attacks is defeating liveness detection, and they are now easy to block: since an anti-spoofing standard was introduced in 2014, mainstream vendors in the industry have shipped liveness detection.

Later came network-level attacks that bypass liveness detection by hijacking the camera feed. The adversarial example attack, by contrast, is not constrained by liveness detection at all: it targets the recognition model itself. The terminal captures the attacker's own live face, liveness checks pass, and the recognition algorithm is then misled by the locally added perturbation.

"For face recognition applications, this is an attack method never seen before," a RealAI engineer explained. "If face recognition is a room, each vulnerability is an unclosed window, and security technologies such as liveness detection are locks on those windows. Vendors may believe the room is sealed, but adversarial examples open yet another window — one that went entirely unnoticed before. It is a new attack surface."

Can we defend against this attack?

With face recognition applications now everywhere, the technology is tightly bound to personal privacy, identity, and property. Once this hole is torn open, a chain reaction follows.

RealAI argues that today's face recognition is nowhere near reliable enough — partly because the technology itself is immature, partly because providers and integrators neglect the problem. "Unlocking the phone is only the first step. In testing, we found that many apps on the phone — including government and financial apps — could be authenticated through adversarial example attacks. We could even impersonate the owner to open a bank account online; the step after that is transferring money."

Will dedicated products and technologies emerge to counter adversarial example attacks? RealAI's answer is yes — the company has already developed corresponding defense algorithms and is helping phone manufacturers deploy upgrades.

"The ultimate goal of all attack research is to find the vulnerabilities, and then apply targeted patches and defenses."

To that end, RealAI last year launched RealSafe, an AI security platform. The company describes the product as antivirus software and a firewall for AI systems: it hardens application-level AI such as face recognition against security risks including adversarial example attacks.

For providers of face recognition technology, the platform enables fast, low-cost security iteration; for organizations deploying the technology, it can securely upgrade systems already in production and, going forward, strengthen security testing of face recognition algorithms, related information systems, and terminal devices.

The concerns raised by face recognition go well beyond this one vulnerability, though. Beyond technical fixes, closing the loopholes for good will require society as a whole to take AI security issues more seriously.

 

  
