Panda Security, Author at Panda Security Mediacenter

Are kids using ChatGPT for school projects? (6 September 2023)


The short answer is yes; they are using ChatGPT in school. Even though most AI tools set a minimum age of 18 in their terms of use, many students bypass the rule and use ChatGPT to generate content that they later submit as their own. OpenAI’s ChatGPT is only one of many options; other tools are either based on ChatGPT or developed by OpenAI’s competitors, such as Microsoft’s Bing Chat and Google’s Bard AI. University students use these tools for assignments, too. It is an undisputed fact that children in the USA are using AI tools.

ChatGPT’s risks

However, students and parents must know that ChatGPT and its variations come with a whole list of associated risks. Educational institutions are getting up to speed fast, and students could be accused of cheating if they are caught using AI tools such as ChatGPT. Even though there are no perfect plagiarism checkers yet, multiple tools that can detect AI-generated content are readily available to teachers. Such checkers are often unnecessary anyway: teachers familiar with their field of study can easily recognize fabricated and untruthful content. AI chatbots sometimes make up dates, facts, and even entire articles, and students often learn this the hard way.

ALSO READ: Back-to-school cybersecurity tips for parents and children

Other risks associated with such tools stem from the recently reported ‘dumbing down’ of ChatGPT and similar services. OpenAI has openly denied such claims, but many users have noticed a significant decrease in ChatGPT’s capabilities over the last few months. Even though the ‘dumbing down’ is disputed, students must understand that chatbots are not a cure-all and that they still need to research their topics thoroughly. Some people dismiss chatbots as glorified typewriters, unable to generate genuinely new content and ideas and limited to recombining what is already available. And access to AI bots is controlled by the companies that own them.

Future-proofing must also be a concern for students using the new technology. Students must remember that even if they manage to pass an exam or submit an assignment now, it may backfire later. Universities and schools may one day re-examine the work, conclude that the generated content was plagiarized, and void a diploma or certificate. Students must also know that ChatGPT is still largely unregulated.

ALSO READ: Back to school cyber security tips

Conversing with an AI-based chatbot may be exciting, but students need to know that the replies they receive are not always informative – answers can be dangerously misleading. The bot might be pushing a political agenda and/or misrepresenting facts; ChatGPT has shown political bias and skewed answers on many sensitive topics. Chatbots often notify users that they could display inaccurate or offensive information… and this is the cold truth.

Students might feel tempted to take advantage of the tool and use it to write a book report to save time. However, actions have consequences: relying on a new, immature technology comes with risks, and at least for now, reading the book and doing all the work the old-fashioned way is the safest method. Many lawyers and “journalists” have already learned this lesson the hard way.

UK AI usage explodes (4 September 2023)


Sometimes the hype surrounding new technologies far outweighs the reality. But a new survey suggests that British citizens are embracing generative AI technologies like ChatGPT at a phenomenal rate.

When questioned by accounting group Deloitte, 26% of UK adults said they have used a generative AI service such as an intelligent chatbot – equivalent to around 13 million British citizens. And one in 10 Brits now use AI tools at least once a day.

ALSO READ: Is the UK about to steal a lead in the AI race?

Not just for fun

A lot of people have been ‘playing’ with ChatGPT to see what the platform is capable of. However, one in 10 (4 million people) are using generative AI for work.

The ability to produce convincing text and images has helped workers become more productive, allowing them to perform common tasks like writing emails, creating artwork or conducting research more quickly.

Surprisingly rapid adoption

Often new technologies take a long time to achieve widespread adoption. Analysts note that it took five years for smart speakers like Amazon Alexa, Apple HomePod and Google Nest to attract similar levels of uptake.

ChatGPT has experienced no such difficulties; people have embraced the platform almost from the moment it was released to the public. Undoubtedly some of this success is due to the fact that no special hardware is required to interact with generative AI – you can access most services from a PC or a smartphone app. Smart speakers require users to purchase specialist hardware, which can be quite expensive in the case of Apple devices. By lowering the barrier to entry, generative AI has overcome many of the problems inherent in other new technologies.

A word of warning

There was one finding in the Deloitte survey that should cause some concern: 40% of those questioned said they believe generative AI systems always produce factually correct answers. Sadly, this is not true.

AI systems are only as accurate as the data used to train them. If the algorithm has received factually incorrect data during training, it is likely that the results of any queries will also be incorrect.

It is also important to note that some AI systems are only able to refer to historical training data. In the case of ChatGPT, no new data has been introduced into the system since September 2021 – meaning that any ‘factual’ information it produces could be two years (or more) out of date.

Exciting times ahead

The fact that British users are embracing generative AI is positive for the industry as a whole. User demand will drive new innovations and improvements, ensuring that the technology becomes even more useful – and valuable.

ALSO READ: Cybersecurity survey: 36% of Europeans don’t even have an IoT device

How to protect your personal data when using ChatGPT and generative AI (30 August 2023)


When it comes to free apps and websites, most of these services rely on collecting – and selling – personal data. Google and Facebook are well known for building detailed profiles of everyone who ever uses their services, using that information to sell targeted advertising.

Generative artificial intelligence systems like ChatGPT are very similar – they collect as much data as they can to help improve the accuracy and performance of their algorithms. So as generative artificial intelligence tools like ChatGPT become a part of our everyday lives, how can you maintain your privacy?

How can you maintain your privacy while using ChatGPT and generative AI?

Be aware of what you are giving away

Unfortunately, the terms and conditions of most online services tend to be extremely complex and hard to understand – often intentionally so. However, failing to read these documents means that you never really understand what you are giving away – or how the platform will use your data in future.

In the case of ChatGPT, any information you type into the chat prompt will be stored and analyzed to help further improve the service. You can probably assume that any generative AI platform will do the same, so you should be very careful about sharing sensitive personal information with these systems.

Everything is vulnerable online

As the recent data breach at the Police Service of Northern Ireland has shown, even the most secure, sensitive IT systems can be breached. Although the AI providers invest time and money into securing their systems, there will always be a risk that they too will be targeted by hackers. And if the criminals manage to break into the generative AI platform, they may also steal your sensitive personal information.

Again, you must be very conscious and careful about the information you share with generative AI systems – and consider what might happen if that information were leaked or stolen.

Adjust your privacy settings

Not every generative AI tool offers privacy settings, but you must use those that do. Both Google and Microsoft expect artificial intelligence to be an integral part of their services in future, so the privacy tools for their AI services are included with the controls for the rest of your account.

This means that you can choose to have data shared with Google Bard automatically deleted periodically for instance. Similarly, Microsoft allows you to review your search history and delete anything (or everything) you no longer want to share.

By using the trash can icon at the bottom of the ChatGPT window, you can delete the contents of a chat as soon as you have finished. You can also prevent your inputs from being saved at all via the Data controls setting. OpenAI naturally suggests you don’t disable this setting – but it is the only way to maintain full control of your data.

As always, the best way to protect your personal data is to stay alert – ask yourself what information you are sharing and how the generative AI system could use it in future. If you have any concerns at all, it’s probably best not to use the system until you are sure you are safe.

Rise in UK IT degree applicants driven by AI advances (28 August 2023)

Students are keen to join the AI revolution, driving up demand for IT degree places.


As August draws to a close, British students have been receiving the results of their A-level exams. Naturally their attention is now shifting towards the new semester and what they will be studying at university.

This year universities have recorded a rise in applications to study IT-related courses. UCAS, the University and Colleges Admission Service which oversees university place allocations in the UK, reports that IT applications have increased by almost 10% since 2022.

AI is driving an increased interest in undergraduate IT studies

Applications to study IT courses have increased steadily since 2019. UCAS chief executive Clare Marchant said she believes the 2023 increases are driven by “the rise of digital and AI”.

Marchant went on to say, “We know that changes in the world around us translate into increased demand for certain courses, as we saw for economics post-2008, and for medicine and nursing during the Covid-19 pandemic.” She credits the growing public conversation around technology and artificial intelligence for increased interest in computing courses.

Vanessa Wilson, of the UK University Alliance, agrees: “The rise in the popularity of computing may well be a response to increasing awareness of the role of technologies such as AI, as well as a strong desire from students to develop what they see as future-proof skills.”

Are digital skills the future?

Software engineering has been the most popular computing course, with applications increasing by 16% since last year. Pure computer science degrees are up 11%, computer games and animation up 2% and artificial intelligence (AI) up 4%.

The chief executive of the British Computer Society, Rashik Parmar, commented: “Teenagers in the UK know that AI will change the world forever; it shouldn’t surprise us to see this soaring demand for computing degrees”.

Increased interest in computing and AI disciplines is good news for the UK. The British government has recently announced plans to help the country become a world-leader in artificial intelligence technologies and disciplines. But to make these plans work, there will need to be an increase in the number of skilled workers – which is why the rise in IT degree applications is so important.

There was some slightly disappointing news, however: only 18% of applications were made by women. Although this figure has grown by 1% since 2022, computing and IT remain a male-dominated industry.

And although 95,000 people applied to study IT courses, this is still far below other subjects. In fact, computing is just the seventh most popular field of study. Business and Management related degrees remain the most popular in the UK, along with design, creative and performing arts courses, medicine, social sciences, biological and sports sciences, and engineering and technology.

But the increase in IT interest is welcome – and seems likely to continue in the years to come.

Google’s Enhanced Safe Browsing Explained (24 August 2023)


Google has been aggressively pushing its users to enable Enhanced Safe Browsing. Bleeping Computer reported that the message to enable the security feature appears even after users reject the invitation. Google insists that this will help users stay safe, and users with the feature enabled are 35% less likely to become victims of online scams. However, turning it on comes with a few drawbacks, including giving Alphabet more detailed access to user browsing habits, associated accounts, and overall online behavior.

READ ALSO: What Is HTTPS? A Guide to Secure Browsing and Sharing

What is Enhanced Safe Browsing, and how does it work?

The feature is not new – a version of it has been around for more than fifteen years. The tool was given a facelift a few years ago, and Google is now promoting it again. Google states that when users enable Enhanced Safe Browsing, Chrome activates a security feature that allows accurate, real-time threat assessment: Google knows which sites users visit as they browse and checks whether each site is blacklisted or flagged for malicious activity. The feature also sends parts of downloaded files to Google for investigation if it suspects those files could be malicious. If the analysis identifies a threat, Google blocks further downloads of the file and warns other users who visit the questionable websites hosting it.
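
To make the general idea more concrete, here is a rough Python sketch of checking a URL against Google’s threat lists using the public Safe Browsing Lookup (v4) API. It is an illustration only – the API key is a placeholder, and this simple lookup is not the same mechanism as the account-linked, real-time checks Enhanced Safe Browsing performs inside Chrome.

    # Illustrative sketch: query the public Safe Browsing Lookup (v4) API.
    # This is NOT the Enhanced Safe Browsing mechanism itself, which performs
    # additional real-time, account-linked checks inside Chrome.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder - a real key comes from Google Cloud
    ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

    def url_is_flagged(url: str) -> bool:
        """Return True if Google's threat lists report a match for the URL."""
        payload = {
            "client": {"clientId": "example-client", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": url}],
            },
        }
        response = requests.post(ENDPOINT, json=payload, timeout=10)
        response.raise_for_status()
        # The API returns an empty JSON object when no threat list matches.
        return bool(response.json().get("matches"))

    if __name__ == "__main__":
        print(url_is_flagged("http://example.com/"))  # needs a valid API key to run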

Why the concerns?

The fact that Alphabet’s Google is actively pushing its users to enable the feature raises some privacy concerns. The tech giant already collects vast amounts of data on its users, and many believe that by enabling this feature in Chrome, users might share even more than before with the tech conglomerate. Google admits that the stored data is temporarily linked to the associated account, used for a period of time, and then anonymized so it is no longer connected to the profile that gathered it. However, cybersecurity experts point out that such collected data could often be re-linked to real people using only information that is publicly available online.

Should you trust it?

Google and its partners already know a lot about you, so if privacy is of little importance to you, enabling the feature might be helpful. By allowing the tool to operate, you get some extra protection and help Google protect other users. If you prefer not to share so much with big tech, keeping the feature off might be your best option. Some people enable the feature simply to stop receiving constant reminders to turn it on.

READ ALSO: Top 10 tips for safer, more secure web browsing

Is it enough? No, not really. Even though the feature can be helpful, having proper antivirus software installed on all connected devices is still necessary. Antivirus software helps keep users from ending up in the wrong place at the wrong time, and it often comes with extras such as a VPN that allow safer browsing without compromising privacy.

What is WormGPT? (21 August 2023)


Artificial intelligence (AI) tools are set to revolutionize the way we work, automating common tasks to help us all be more productive. Unfortunately, AI can also be used for illegal activities – as the new WormGPT system shows.

What is WormGPT and what does it do?

By now you have probably heard of ChatGPT, the generative AI engine which provides highly accurate answers to questions. The genius of ChatGPT is how accurate and human-sounding those responses are, particularly when the AI engine is asked to ‘write’ something quite long and complex, like a blog article or a poem.

Now hackers have got into the game with WormGPT, a generative AI platform designed to assist with criminal activities. According to researchers, WormGPT is being promoted on darknet forums as the “biggest enemy of the well-known ChatGPT that lets you do all sorts of illegal stuff.”

Specifically, WormGPT automates the creation of highly convincing fake emails that are personalized to the recipient. Because of this high degree of personalization, they are far more likely to trick people into disclosing passwords or installing malware.

Is there anything else I need to know?

At present, WormGPT is primarily concerned with writing effective phishing emails. But like ChatGPT, WormGPT can be used to write code automatically – including malware and cybersecurity exploits. It is likely that these AI tools will help hackers develop malware faster – which means we may see an increase in new attacks in the near future.

At the same time, criminals are also investing resources into ‘breaking’ other generative AI platforms. Some bad actors are now promoting “jailbreaks” for ChatGPT – hacks designed to extract sensitive information. Others are using the built-in API to manipulate ChatGPT itself, generating output that could involve disclosing sensitive information, producing inappropriate content or executing harmful code.

Should I be worried?

Although new techniques are emerging all the time, there has not been a significant spike in AI-influenced cybersecurity incidents. As always, you are strongly advised to stay alert – the malware may change, but the methods used to infect your devices remain the same.

And no matter how cybersecurity threats change, antimalware will remain an important – and extremely effective – defense. Don’t leave yourself unprotected – download a free trial of Panda Dome today.

Back-to-school cybersecurity tips for parents and children (17 August 2023)


We are halfway through the summer, and kids in the USA are getting ready to return to school.

While it is exciting to be back in the classroom, life in a post-pandemic world also brings new concerns, as parents will not always be with their children.

Schools are returning to normal and distance learning is losing ground, and teaching children how to protect themselves as they develop a digital lifestyle is just as important as teaching them to stay safe in real life.

READ ALSO: Are your children ready to go back to school?

Here are a few suggestions on how children and parents can avoid trouble in the new school year.

The importance of privacy

TikTok and social media have made it easy for children to take a shot at overnight stardom and develop a following while still in school. However, social media platforms like TikTok can also lead to addiction and to desperate moves to garner attention. While content is fun to make and allows children to express their creativity, parents must know who the audience for that content is. And if videos or posts are fully public, they should not disclose any information that could reveal the address or full name of the children involved.

Predators exist

Parents sometimes forget that predators are likely to lurk even more often in the digital world than in real life. Keeping an eye on the kids while respecting their privacy is a must, especially if there is a teen in the family. Predators can be everywhere – in PC video game chats, on smartphones, and even on the Nintendo Switch. Discussing the types of alarming behavior and how kids can recognize them and report them to a parent is essential. Students might not realize how exposed they are to the outside world online, so it is the parent’s job to provide some basic cybersecurity education.

READ ALSO: TBH Meaning + Online Slang Parents Should Know

Cyberbullying

Talk about cyberbullying with the kids. Children of all ages – and sometimes even parents – must understand that what happens online can have real-life consequences. Long gone are the days of complete anonymity. Teach the little ones to be as responsible online as they are in the real world. Distasteful behavior online can be even worse, as digital footprints can haunt a person forever. Teach children to be neither the victim nor the bully.

ALSO READ: 52 Alarming Cyberbullying Statistics and Facts for 2023

Social media challenges

Stay on top of the trends and act quickly when kids attempt to do something unhealthy. These challenges often start at school, and teachers are trained to recognize harmful behavior and alert parents if they see something. Still, teachers often have more than 20 children per class, so relying on reports from teachers alone isn’t enough. Keeping an eye on what is happening in your children’s digital life is a must.

Phishing attempts

No one is fully protected from phishing attempts. One way or another, hackers always find a way to deliver an email or a text message to potential victims. Antivirus software can successfully shield people from such criminal attempts, but even with protection, malicious content sometimes still ends up in someone’s inbox. Everyone, from senators in the government to children with school email accounts, gets targeted by cybercriminals, and everyone should know not to click on those malicious links.

READ ALSO: 11 Types of Phishing + Real-Life Examples

Antivirus solutions such as Panda Dome Premium often come bundled with parental control features. Using the tools in such protection suites helps parents limit a child’s exposure to phishing emails, online predators, cyberbullies, and dangerous social media behavior.

How to make your instant messages ‘unhackable’ (14 August 2023)

5 practical tips for securing instant messaging apps and keeping your private chats private.


According to one study, 60% of people prefer instant messaging to phone calls. Messaging apps have become an essential communication tool – many people rely on these services to stay in touch with friends, family and colleagues, and to interact with businesses too.

This means that people often share extremely sensitive information about their health, personal life, work and relationships via instant message. It also means that it could be extremely damaging if your instant messages were ever exposed.

So what can you do to better protect your instant messages against being stolen or leaked?

1. Enable end-to-end encryption

Hackers often try to intercept data as it passes over the internet – including your instant messages. End-to-end encryption uses cryptography to protect messages in transit, ensuring they can only be read on your device and the recipient’s device. Even if hackers manage to capture your messages, they cannot read them because they cannot decrypt them.

End-to-end encryption is available in popular messaging apps like WhatsApp, iMessage, Signal, Facebook Messenger and Telegram.
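
For the technically curious, here is a minimal Python sketch of the end-to-end principle using the PyNaCl library: a message encrypted for the recipient’s public key can only be decrypted with their private key, so anyone intercepting it in transit sees nothing useful. This is a toy illustration, not the full Signal-style protocol that production messaging apps actually use.

    # Toy sketch of end-to-end encryption with PyNaCl (pip install pynacl).
    # Real messaging apps add key verification, forward secrecy and more.
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; only the public halves are exchanged.
    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts for Bob using her private key and Bob's public key.
    sending_box = Box(alice, bob.public_key)
    ciphertext = sending_box.encrypt(b"Meet at the station at 6?")

    # Anyone who intercepts `ciphertext` on the network sees only random bytes.
    # Only Bob, holding his private key, can decrypt and read the message.
    receiving_box = Box(bob, alice.public_key)
    print(receiving_box.decrypt(ciphertext).decode())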

2. Set your messages to self-destruct

Some apps, like Facebook Messenger and WhatsApp, allow you to auto-delete messages after they have been read (a bit like Snapchat). Enabling this feature ensures that your messages disappear within a specified time limit and cannot be recovered from your device or your friend’s.

Other apps, like Apple’s iMessage, allow you to auto-delete older conversations by defining a time limit in the ‘Keep messages’ setting. Any messages older than the specified time frame will be permanently deleted.

3. Double-lock your chat apps

Your phone is protected by a passcode, so why not your apps too? WhatsApp, Signal, Telegram and Facebook Messenger allow you to set an additional passcode to access the app. No passcode, no messaging.

Thieves will have to steal your phone and two passcodes if they want to read your secret messages.

4. Secure your profiles

All instant messaging apps allow you to block users, but most also let you control your profile and who can message you. Check your profile settings to see who can contact you and how much information you are sharing publicly (such as your location, address and profile pictures). The less you share publicly, the lower the risk of your information being misused by criminals.

5. Check your backups

Most messaging apps provide backups to ensure you can still read your messages when you switch devices. But if a hacker steals your backups, they may be able to recover your secret chats.

You need to know where your backups are stored and whether they are encrypted. You can then decide where the safest place to keep them is, away from hackers.

Don’t underestimate the risks

Because we use instant messaging apps for everything, they are a goldmine of valuable information for cybercriminals. By following the five steps outlined here, you can protect your privacy and secure your messages against theft.

Chinese hackers stole US government emails (10 August 2023)


In a joint statement issued on July 12th, 2023, CISA and the FBI confirmed that advanced persistent threat (APT) actors managed to access and download Exchange Online Outlook data, including information from the email accounts of U.S. government employees. The news was also confirmed by Microsoft, which continues to work on mitigating the attack. The company identified the hacker organization, which operates out of Asia and specializes in targeting government agencies in the Western world: Microsoft believes the attack came from the Chinese hacker group it tracks as Storm-0558.

A Federal Civilian Executive Branch (FCEB) agency first reported the incident after identifying suspicious activity in its Microsoft 365 cloud environment, and notified CISA, the FBI and Microsoft. The hackers were able to gain unauthorized access to customer email accounts through Outlook Web Access in Exchange Online (OWA) and even Outlook.com.

The criminals had access to the email accounts since mid-May, pulling it off by forging authentication tokens used to access user email. Microsoft believes that all the data stolen by the perpetrators was unclassified, even though the hackers specifically targeted the email accounts of high-profile individuals, including members of the House of Representatives. The identities and party affiliations of the targeted elected officials have not been publicly released.

READ ALSO: PGP Encryption: The Email Security Standard

Microsoft began investigating the anomalous mail activity after customers, including the FCEB agency, reported the problem. The investigation concluded that starting on May 15th, 2023, the attackers gained access to email accounts at dozens of organizations, including government agencies, as well as accounts of individuals associated with the targeted organizations. Microsoft contacted the affected parties, and the issue has now been resolved – all affected Microsoft customers have been informed of the security incident. Even though Microsoft continues the investigation, the data leakage has been stopped.

The attack was backed by significant resources and focused solely on espionage. There are no reports of ransom demands from the hacker organization, so the attack is likely state-driven. Such attacks are often considered part of the espionage efforts between global superpowers such as the USA and China. Reuters reported that China’s embassy in London denied involvement in the incident and called the news “disinformation.” The Chinese also described the USA as the world’s biggest hacking empire and called the country a “global cyber thief.”

READ ALSO: How to make Microsoft Office (mostly) unhackable

Android automated SOS feature is causing serious problems (7 August 2023)

UK emergency services report a spike in false alarms caused by an Android feature that calls for help immediately after incorrectly detecting an action.


With every new software update, smartphone manufacturers are looking for ways to make our lives easier, more efficient and safer. So when Google released a new Android feature to automate calls to the emergency services, it was expected to be a massive success. But reports from UK police suggest that the Emergency SOS function is actually causing serious problems.

What is the Emergency SOS feature?

In an emergency, the faster you can contact the emergency services, the better. To make the call, you normally have to unlock your handset, open the phone app and then dial a three-digit number (911 in the US, 999 in the UK or 112 in Europe) to speak to an operator.

Most of us can complete this process very quickly, but every second counts in an emergency. And what happens if you are trapped or injured and cannot do all these steps?

So Android’s developers decided they could make the process much quicker and easier with the Emergency SOS feature. When enabled, users can call the emergency services just by pressing the power button multiple times. There’s no need to unlock the phone or dial a number – just press a single button.

So what’s the problem?

It is very easy to trigger the Emergency SOS function – and this is a problem. British police report that there have been hundreds of nuisance calls to the emergency services helpline, where Android phone owners have accidentally pressed the power button repeatedly.

When a user ‘butt dials’ the emergency services helpdesk, an operator must follow up to confirm the call was an accident. According to officials, these checks can take up to 20 minutes, drawing important resources away from dealing with genuine emergencies.

What can be done?

Google is clear that smartphone handset manufacturers are responsible for how the Emergency SOS function is implemented on their devices. However, it has agreed to provide manufacturing partners with additional guidance and advice, which can then be passed on to customers. By simply raising awareness of how the feature can be triggered accidentally, Google hopes that customers will take more care with their phones.

Google also advises anyone who has made multiple accidental 999 calls to deactivate the function for now. You can do this by opening your phone’s settings, searching for ‘Emergency SOS’ and toggling the switch to ‘Off’.

In the meantime, we look forward to seeing the additional guidance provided by Google – and how Android smartphone owners can prevent accidental calls to the emergency services.
