
Web Application Vulnerability Leads to Compromised Data

Georgia Tech Data Breach

Last week, Georgia Tech announced a vulnerability in a web application that compromised 1.3 million individuals’ information, spanning current students, alumni, and employees. The vulnerability allowed unauthorized, third-party access to a central Georgia Tech database. The university hasn’t released many details yet, but we do know the basics of the incident.

The Georgia Tech data breach was discovered in late March, but the unauthorized access has been traced back to December. The vulnerability in the web application has been patched, and the university is searching for any additional, unknown vulnerabilities. Its cybersecurity team is now conducting a forensic investigation to determine how this breach happened, especially since it’s the second breach within a year. In 2018, 8,000 Georgia Tech College of Computing students’ information was emailed to the wrong recipients because of human error.

Cybersecurity Risks in Higher Education

The Georgia Tech data breach proves, once again, that any organization can be compromised. Even a university with a leading computing program and top cybersecurity talent can be impacted by a data breach. Georgia Tech’s relationships with technology companies and the government probably made the university an even more attractive target.

A data breach in the education industry costs $166 per compromised record, according to the Ponemon Institute. Institutions of higher education can be targeted for personally identifiable information, research, payroll information, Social Security numbers, or other critical assets. For most organizations, a cybersecurity attack is a matter of when it will happen, not if. The Georgia Tech data breach isn’t the first time a university has been targeted, and it won’t be the last. Is your institution doing everything it can to protect itself from attacks?

Security of Web Applications

Web applications are unique constructs, mixing various technologies and presenting an interactive front end for others to use. Some web applications are public-facing, while others are internal applications living on an intranet. No matter where they sit, web applications serve critical functions and are susceptible to many cyber threats, as we see with the Georgia Tech data breach. To mitigate risk, web applications need to be thoroughly tested for application logic flaws, forced browsing, broken access controls, cookie manipulation, horizontal and vertical privilege escalation, insecure server configuration, source code disclosure, and URL manipulation, among other issues.
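
To make one of these tests concrete, here is a minimal, illustrative sketch of a horizontal privilege escalation check: authenticate as one user and confirm the application refuses to serve another user’s record. The URL, cookie name, and record ID are hypothetical placeholders, not details from the Georgia Tech application or any specific testing tool.

```python
# Minimal sketch of a horizontal privilege escalation check.
# All URLs, cookies, and IDs below are hypothetical placeholders.
import requests

BASE_URL = "https://app.example.edu"               # hypothetical application
USER_A_SESSION = {"session": "token-for-user-a"}   # session cookie for user A
USER_B_RECORD_ID = 1002                            # a record owned by user B

def check_horizontal_escalation() -> bool:
    """Return True if the app correctly blocks user A from reading user B's record."""
    response = requests.get(
        f"{BASE_URL}/students/{USER_B_RECORD_ID}",
        cookies=USER_A_SESSION,
        timeout=10,
    )
    # A well-behaved application should respond 403 (or 404) rather than 200.
    return response.status_code in (403, 404)

if __name__ == "__main__":
    print("Access control held:", check_horizontal_escalation())
```

A real penetration test goes far beyond a single request like this, but the same idea applies: exercise the application as one user and verify it never returns another user’s data.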

At KirkpatrickPrice, we want to find the gaps in your web applications’ security before an attacker does. For this reason, we offer advanced web application penetration testing. Contact us today to learn more about how our services can help secure your web applications.

More Assurance Resources

How Can Penetration Testing Protect Your Assets?

Ransomware Alert: Lessons Learned from the City of Atlanta

Why is Ransomware Successful?

Canada’s New Breach Notification Law: Preparation and Impact

On November 1, 2018, Canada’s Digital Privacy Act amended the Personal Information Protection and Electronic Documents Act (PIPEDA) to include the Breach of Security Safeguards Regulations. Organizations subject to PIPEDA now have to report breaches that pose a “real risk of significant harm” to affected individuals to the Office of the Privacy Commissioner of Canada (OPC). What does this new regulation mean for organizations, and how can they operate in a way that supports it?

Why Did Canada Introduce a New Breach Notification Law?

The entire world is stepping up its game when it comes to privacy laws because of the continual growth of personal data sharing, unauthorized disclosures, and controversial uses of personal data. PIPEDA is Canada’s federal privacy law that regulates how organizations and businesses handle personal information. Like many privacy laws, it applies when personal information is collected, used, or disclosed in the course of commercial activity.

The purpose of PIPEDA is similar to that of GDPR or CCPA: to facilitate growth in electronic commerce by increasing the confidence of digital consumers, and to contribute positively to the readiness of Canadian businesses. PIPEDA aims to balance the privacy rights of individuals with the legitimate needs of business. Because so many Canadian organizations are required to comply with GDPR, this new regulation will further align PIPEDA with GDPR.

What Does My Organization Need to Know About Canada’s New Breach Notification Law?

If you’re not familiar with PIPEDA, Canada’s Digital Privacy Act, or the new Breach of Security Safeguards Regulations, the following key points will help you understand the basics of Canada’s new breach notification law:

  • PIPEDA defines a breach of security safeguards as the loss of, unauthorized access to, or unauthorized disclosure of personal information resulting from a breach of an organization’s security safeguards or from a failure to establish those safeguards.
  • PIPEDA defines significant harm as bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on the credit record, and damage to or loss of property.
  • Whether the breach of security safeguards impacts one individual or thousands, it still needs to be reported if there is a real risk of significant harm.
  • Under PIPEDA’s accountability principle, even if an organization transfers personal information to a third party for processing purposes, it’s still responsible for the security of that personal information. Organizations must have appropriate contractual agreements in place to ensure that the relationship complies with PIPEDA.
  • Under the Breach of Security Safeguards Regulations, the contents of notification must include the description and/or cause of the breach, the date or period of the breach, a description of the personal information that was breached, the number of individuals impacted, what the organization has done to reduce the risk of harm to victims, how the organization will notify the victims, and a point of contact for information about the breach (a simple record sketch capturing these fields follows this list).
  • When a breach has occurred, the organization must maintain a record for a minimum of 24 months.
  • Failure to report a breach that poses a real risk of significant harm could result in fines of up to $100,000 for each individual affected by the breach, if the federal government decides to prosecute a case. Under the current law, the OPC cannot issue fines or corrective actions; it can only advise organizations on how to make changes.
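
As a way to internalize these requirements, here is a minimal sketch of a breach record that captures the notification fields listed above and flags when the 24-month retention window has passed. The class and field names are our own illustrative choices; PIPEDA and the OPC do not prescribe a specific schema.

```python
# Illustrative breach record for PIPEDA's notification and record-keeping duties.
# Field names are our own; PIPEDA does not prescribe a specific schema.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

RETENTION_PERIOD = timedelta(days=730)  # records must be kept at least 24 months

@dataclass
class BreachRecord:
    description_and_cause: str
    breach_date_or_period: str
    personal_info_breached: List[str]
    individuals_impacted: int
    risk_reduction_steps: str
    victim_notification_method: str
    point_of_contact: str
    record_created: date = field(default_factory=date.today)

    def retention_expired(self, today: date) -> bool:
        """True once the minimum 24-month retention period has elapsed."""
        return today - self.record_created >= RETENTION_PERIOD

record = BreachRecord(
    description_and_cause="Misdirected email containing personal information",
    breach_date_or_period="2018-12-01 to 2019-03-21",
    personal_info_breached=["name", "email address"],
    individuals_impacted=42,
    risk_reduction_steps="Recalled messages and asked recipients to delete them",
    victim_notification_method="Direct email to affected individuals",
    point_of_contact="privacy@example.ca",
    record_created=date(2019, 3, 25),
)
print(record.retention_expired(date(2021, 6, 1)))  # True: more than 24 months later
```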

How Can Organizations Prepare?

This new breach notification law was published in April 2018 but did not take effect until November, giving organizations roughly six months to prepare. Some reasonable preparation steps for your organization include the following:

  • Create a formal incident response plan that has been tested and implemented.
  • Create breach notification templates that include fields for all required content.
  • Conduct a formal risk assessment to determine the likelihood of a breach and the factors that are relevant to real risk of significant harm.
  • Perform data mapping to determine where personal information is collected, processed, or stored.
  • Assess user access activities and consider operating on a business need-to-know basis.
  • Stay aware of other breaches in your industry and learn from them. Don’t make the same mistakes as your competitors.

More Resources

OPC’s Tips for Containing and Reducing the Risks of a Privacy Breach

OPC’s Self-Assessment Tool for Securing Personal Information

OPC’s Breach Report Form

Getting the Most Out of Your Information Security and Cybersecurity Programs in 2019

As organizations plan their information security and cybersecurity efforts for 2019, we often hear confusion and frustration about frameworks modifying their requirements, the cost of audits and assessments rising, scopes getting bigger, and testing seeming to get more difficult.

The threats will do nothing but persist in 2019, and you need to do more to protect your organization. When prices, scope, or frequency increase, here’s what we’re going to ask you: don’t you want more in 2019 than you got in 2018?

Root Causes of Data Breaches and Security Incidents

Some things stay the same. The root causes of data breaches and security incidents center around three areas: malicious attackers, human error, and flaws in technology. Let’s dive into how these areas impact your organization’s information security and cybersecurity efforts.

  • Organized criminal groups aren’t stopping; they’re only getting more sophisticated. They’re using tried-and-true techniques that continue to work on victims. There’s obviously financial motivation, but a malicious attacker could also be motivated by a political agenda, a social cause, convenience, or just fun.
  • Employees will continue to be your weakest link. Verizon’s 2018 Data Breach Investigations Report states that one in five breaches occurs because of human error.
  • As if human error weren’t bad enough, malicious insiders are even worse. 28% of breaches in 2018 involved insiders.
  • Technology is a blessing and a curse. Systems glitch and cause major data breaches and security incidents.
  • It’s almost impossible to run a business without involving third parties. Inevitably, third parties cause data breaches and security incidents, and your organization must deal with the consequences.  
  • Timing is everything when it comes to data breaches and security incidents, and hackers are usually quicker than your team. Ponemon’s 2018 Cost of a Data Breach Study reports that the average time to identify a data breach was 197 days in 2018. To actually contain the breach? 69 days.

These root causes, all connected to malicious attackers, human error, and flaws in technology, impact your organization’s information security and cybersecurity efforts in a significant way. Did you experience a negative impact from these areas in 2018? How are you going to mitigate the risks in these areas for 2019?

Cost of a Data Breach

There’s no denying that information security and cybersecurity efforts require a financial investment, but so do data breaches and security incidents. According to Ponemon, the average total cost of a data breach was $3.86 million in 2018 – a 6.4% increase from 2017. You can bet that in 2019, that number will grow again.

Organizations are usually surprised by the elements that influence the cost of a data breach, including:

  • Loss of customers
  • Size of the breach
  • Time it takes to identify and contain a data breach
  • Effective incident response team
  • Legal fees and fines
  • Public relations fees
  • Information security and cybersecurity program updates

Take the City of Atlanta, for instance. When the SamSam ransomware attack hit in March of 2018, it was initially estimated to cost $2.6 million in emergency response efforts. Incident response consulting, digital forensics, crisis communication, Microsoft expertise, remediation planning, new equipment, and the actual ransom cost added up quickly. It’s now speculated that this ransomware attack cost $17 million.

As the cost of a data breach rises, so does the cost of information security auditing and testing. The threats are pervasive – how can you make a smart investment to avoid the cost of a data breach?

Your Plan for 2019

Now that you’ve learned about the persistent root causes of data breaches and security incidents, plus the cost of a data breach, what are you going to do about it in 2019? How are you going to modify your information security and cybersecurity efforts? Here are a few areas to consider as we head into a new year:

  • When was the last time you performed a formal risk assessment? Risk assessments can provide you with what we call the three C’s: confidence, clear direction, and cost savings.
  • If your weakest link is employees, how will you hold them accountable to their security awareness training?
  • Ponemon reports that when an organization has an incident response team, they save $14 per compromised record. Has your incident response plan been tested recently?
  • What security automation tools would be a valuable investment for your organization? According to Ponemon, security automation is a way to decrease the cost of a data breach because you’re able to identify and contain the attack faster.
  • Ask your auditing firm to educate you on what new cybersecurity testing exists and which relevant requirements will be changing in 2019.

No defense is 100% effective. There are no guarantees that a data breach or security incident won’t occur. Organizations must be vigilant in doing what they can to prepare, detect, contain, and recover from persistent and sophisticated threats. Auditing firms must also commit to providing quality, thorough services that will empower organizations to meet their challenging compliance objectives. At KirkpatrickPrice, that’s our mission and our responsibility. Contact us today to discuss how we can prepare your organization for the threats of 2019.

More Data Breach and Incident Response Resources

What Is an Incident Response Plan? The Collection and Evaluation of Evidence

[24]7.ai Cyber Incident: How Your Vendors Can Impact Your Security

Rebuilding Trust After a Data Breach

Horror Stories: Million Dollar Malware Losses

California Consumer Privacy Act vs. GDPR: What Your Business Needs to Know

Data Privacy and Security in the US

According to Pew Research Center, 64% of American adults have experienced data theft. Yahoo, eBay, Equifax, Target, Anthem, Home Depot – it has become habitual to worry about data breaches, identity theft, and other privacy concerns. With every new headline of a data breach, it seems like consumers are losing more control over what personal information is publicly available.

At the same time, it’s nearly impossible to go through an ordinary day without sharing personal information. There are businesses out there that know where you live, how fast you drive, how many hours of sleep you got last night, if you’re on budget for the month, what type of music you listen to, how many times you’ve tweeted this month, if you’re meeting your fitness goals, and how many children you have – just to name a few categories. With the complexity and sophistication of the current threat landscape, regulators, lawmakers, and consumers must be more alert than ever. In 2018, numerous states added or updated data privacy and breach notification laws, including:

  • The Alabama Breach Notification Act of 2018 went into effect on June 1, 2018 to heighten consumer protections.
  • Arizona amended its breach notification law, HB 2145, to expand the definition of personal information and refine notification timelines.
  • Colorado enhanced consumer protections through HB 1128, which went into effect on September 1, 2018.
  • Ohio passed The Data Protection Act, a scalable bill that focuses on businesses’ cybersecurity programs.
  • Iowa passed HF 2354 to regulate the protection of student information when used on an online service or application.
  • Louisiana amended Act No. 382 to create a more comprehensive data privacy and breach notification law.
  • Nebraska passed LB 757, a bill requiring “reasonable security procedures and practices” to provide consumer protection.
  • Oregon amended its breach notification law through SB 1551, which extends its scope and went into effect on June 2, 2018.
  • The South Carolina Insurance Data Security Act, which goes into effect on January 1, 2019, emphasizes the need for cybersecurity programs and incident response plans in the insurance industry.
  • South Dakota enacted its first breach notification law in SB No. 62, effective on July 1, 2018.
  • Vermont passed H. 764, which will regulate data brokers’ information security programs and data privacy practices.
  • Virginia extended its breach notification law, HB 183, to include income tax information.

The California Consumer Privacy Act of 2018 has stood out among state laws, though. Let’s discuss what this law is and why it is being perceived as the US equivalent of GDPR.

Introducing the California Consumer Privacy Act of 2018

In June, California Governor Jerry Brown signed into law AB 375, enacting The California Consumer Privacy Act of 2018 (CCPA). Despite opposition from industry leaders like Google, Verizon, Comcast, and AT&T, approximately 629,000 Californians petitioned to get the law on the ballot, and now, Californians have been granted the most comprehensive consumer privacy rights in the country. This is evidence that consumers want ownership, control, and security over their personal data.

The purpose of CCPA is to give consumers more rights related to their personal data, while also holding businesses accountable for respecting consumers’ privacy. Because of California’s reputation as a hub for technology development, this law speaks to the needs of its consumers, which continue to evolve with technological advancements and the resulting privacy implications surrounding the collection, use, and protection of personal information.

For-profit businesses that do business in California and meet any of the following thresholds must comply with the CCPA (a simple applicability sketch follows the list):

(A) Have annual gross revenues of over $25,000,000;

(B) Buy, sell, or share the personal information of 50,000 or more consumers per year; or

(C) Derive 50% or more of their annual revenues from selling consumers’ personal information.
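
As a rough illustration, the sketch below encodes these three thresholds as a simple applicability check for a for-profit business operating in California. It is not legal advice, and the class, field, and function names are our own hypothetical choices rather than anything defined by the statute.

```python
# Rough, illustrative CCPA applicability check based on the three thresholds above.
# Not legal advice; names and structure are hypothetical.
from dataclasses import dataclass

@dataclass
class Business:
    does_business_in_california: bool
    annual_gross_revenue: float                 # in US dollars
    consumers_whose_data_is_handled: int        # bought, sold, or shared per year
    share_of_revenue_from_selling_data: float   # 0.0 to 1.0

def ccpa_applies(b: Business) -> bool:
    """Return True when a California business meets any of the three thresholds."""
    if not b.does_business_in_california:
        return False
    return (
        b.annual_gross_revenue > 25_000_000
        or b.consumers_whose_data_is_handled >= 50_000
        or b.share_of_revenue_from_selling_data >= 0.50
    )

example = Business(True, 10_000_000, 75_000, 0.10)
print(ccpa_applies(example))  # True: handles data of 50,000+ consumers per year
```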

Has the GDPR Made Its Way to the US?

The European Union’s legislation, the General Data Protection Regulation (GDPR), has been a top regulatory focus of 2018, even among US companies. The first globally relevant data privacy regulation of its kind, GDPR is considered to be one of the most significant information security and privacy laws of our time. GDPR applies to any entity collecting, using, or processing personal data of any data subject in the EU, which means that the applicability of the law follows the data, wherever in the world that data resides.

California Consumer Privacy Act vs. GDPR: What Your Business Needs to Know

We do see some similarities between GDPR and CCPA, especially in their purpose and definitions. Both GDPR and CCPA are heavily focused on consumers’ desire for privacy and control over their personal information. After reviewing both laws, you’ll find regulators designed both to give consumers more rights and hold businesses accountable for respecting consumers’ privacy. You’ll also notice that the two laws’ definitions for the terms “processing” and “personal information” closely align.

Many of the best practices that organizations are using to comply with GDPR will be effective when beginning to comply with CCPA. Data mapping, documentation review, contract management – these activities will assist organizations in their compliance journeys. Additionally, CCPA may become a model for other state privacy laws or even a federal privacy law, so compliance with CCPA may give organizations an advantage for compliance with other state or federal privacy laws.

If GDPR or CCPA applies to your business, we encourage you to begin your preparation by following the data, starting the paper chase, performing thorough internal documentation review, and identifying which security standards are appropriate for your organization. Contact us today for more information on how to comply with state laws or GDPR.

More Resources

The Cost of GDPR Non-Compliance: Fines and Penalties

10 Key GDPR Terms You Need to Know

What NY CRR 500 Means for Vendor Compliance Management

What is Cybersecurity?

Horror Stories: Facebook Fallout

In late September, Facebook issued a new security update, outlining a breach that impacted 50 million users – Facebook’s largest breach ever. The social network has been under intense scrutiny this year after the Cambridge Analytica scandal and has been reorganizing its security team since the departure of its chief security officer, Alex Stamos. With the midterm elections coming up, this massive breach couldn’t have come at a worse time for Facebook. Users, regulators, lawmakers, and competitors are watching to see how Facebook improves the way it handles the private data of its users and how the social network giant handles this latest breach. Many believe it is time for the government to step in, and others are focusing on the GDPR implications of this breach.

Facebook’s Largest Breach: What Happened?

Even this early in the investigation, Facebook knows that the attack stemmed from the “View As” feature, which impacted access tokens. Specifically, hackers exploited a combination of three bugs: one in the post composer for birthday posts, one in a new version of the video uploader, and one that appeared when using the “View As” feature in conjunction with the video uploader. In their security update, Facebook reported, “When using the View As feature to view your profile as a friend, the code did not remove the composer that lets people wish you happy birthday; the video uploader would generate an access token when it shouldn’t have; and when the access token was generated, it was not for you but the person being looked up. That access token was then available in the HTML of the page, which the attackers were able to extract and exploit to log in as another user. The attackers were then able to pivot from that access token to other accounts, performing the same actions and obtaining further access tokens.”
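
To make the chain of bugs easier to follow, here is a purely illustrative sketch of the flawed token-issuing logic Facebook described. This is not Facebook’s code; the function and variable names are hypothetical, and the point is simply that the token was minted for the profile being looked up rather than for the viewer, and then exposed in the rendered HTML.

```python
# Illustrative sketch of the logic flaw described in Facebook's security update.
# Hypothetical names; not Facebook's actual code or API.

def render_view_as_page(viewer_id: str, looked_up_id: str) -> str:
    """Render a 'View As' page; the buggy path leaks a token for the wrong user."""
    # Bug 1: the birthday composer (and its video uploader) should not appear here.
    composer_html = render_birthday_composer(looked_up_id)

    # Bug 2: the video uploader requests an access token it never needed.
    # Bug 3: the token is issued for the looked-up profile, not the viewer.
    token = issue_access_token(user_id=looked_up_id)   # should have been viewer_id

    # The token ends up embedded in the page HTML, where an attacker can extract it,
    # act as the looked-up user, and repeat the process to pivot to more accounts.
    return f"{composer_html}<script>var accessToken = '{token}';</script>"

def render_birthday_composer(user_id: str) -> str:
    return f"<div class='composer' data-user='{user_id}'></div>"

def issue_access_token(user_id: str) -> str:
    return f"token-for-{user_id}"  # stand-in for a real token service

print(render_view_as_page(viewer_id="alice", looked_up_id="bob"))
# The rendered HTML contains 'token-for-bob' even though alice requested the page.
```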

To quickly fix the vulnerability, Facebook reset the access tokens of the 50 million impacted accounts, plus another 40 million accounts as a precautionary measure. As a result, users had to log back into their accounts and then saw a notification in their News Feed explaining the security incident. Facebook also switched off the “View As” feature during its security review. As the investigation continues, Facebook must provide transparency about three elements of this breach: whether accounts or data were misused, who the attackers were, and whether third parties were impacted.

Facebook needs to clearly announce whether accounts were misused or if any private information was accessed during this breach. All we know so far is that the attackers retrieved basic profile information like name, gender, or hometown. Guy Rosen, vice president of product management at Facebook, explained in a press call, “…We don’t know exactly how – which and how – what information we will find has been used. What we’ve seen so far is that access tokens were not used to access things like private messages, or posts, or to post anything to these accounts and we’ll update as we learn more…what we also can confirm is that no credit card information has been taken. We do not display credit card information, even to account holders.”

The public also wants to know who these hackers are and who they’re supported by. Guy Rosen explained in a press call, “Given this investigation’s still early, we haven’t yet been able to determine if there’s specific targeting. It does seem broad and we don’t yet know who is behind these attacks or where there’s base – or where they might be based…The investigation is early, and it’s hard to determine exactly who was behind this, and we may never know. This is a complex interaction of multiple bugs that happened together. It did – it did need a certain level in order for the attacker to run this attack in a way that not only gets access tokens, but then pivots on those access tokens and continues to further – get further access tokens using this mechanism.”

Facebook must also investigate whether any third-party services that use its single sign-on function were impacted by this breach. So far, Facebook hasn’t found evidence of third parties being compromised. Thousands of companies use this identity provider function, including Spotify, Instagram, Airbnb, Pinterest, GoFundMe, Headspace, and others. Guy Rosen stated that WhatsApp users are not impacted by this breach, but Tinder has called on Facebook for transparency and full disclosure during its investigation to better support third parties in their own investigations.

Midterm Elections, GDPR Implications, and Facebook’s Reputation

There seem to be two conversations surrounding Facebook’s latest breach: how this attack reflects Facebook’s preparation for the midterm elections and how this attack needs to be handled in terms of GDPR.

With the midterm elections coming up and the Cambridge Analytica scandal in the rearview, users, regulators, lawmakers, and competitors are watching to see how Facebook is protecting itself from election interference. In fact, two weeks before this breach, Mark Zuckerberg posted Preparing for Elections, a blog post addressing exactly that – Facebook’s defense against election interference. It calls for enforcement over fake accounts, the spreading of misinformation, and advertising transparency and verification. It also speaks of coordination with governments and industries across the globe. Zuckerberg wrote, “While we’ve made steady progress, we face sophisticated, well-funded adversaries. They won’t give up, and they will keep evolving. We need to constantly improve and stay one step ahead. This will take continued, heavy investment in security on our part, as well as close cooperation with governments, the tech industry, and security experts since no one institution can solve this on their own.”

In the wake of this latest breach, is Facebook’s defense plan enough?

With GDPR in mind, Facebook notified the FBI and the Irish Data Protection Commission of this breach. Many suspect that if not for the GDPR’s breach reporting requirements, Facebook wouldn’t have notified the public about this breach until there were more details about the scope of who was impacted and where the attack came from. From the Irish Data Protection Commission’s tweets, we can gather that they are not satisfied with the level of detail provided in Facebook’s breach report. Organizations worldwide need to recognize how strict GDPR’s breach reporting requirements are and what penalties they could face.

During a press call, the New York Times asked Zuckerberg, “I’m just thinking back to your testimony in congress and one of the main points you made was if Facebook’s here to serve its users and if you can’t be responsible with user data then you don’t deserve to serve users. And I guess I’m just wondering if you still think you all are able to do that because it just — it seems like a pretty — another pretty big breach of user trust?” This is the exact question so many are wondering. If Facebook takes a hit from any more breaches or incidents, how will users, regulators, lawmakers, and competitors react?

More Resources

Facebook’s Morning Press Call Transcript

Facebook’s Afternoon Press Call Transcript

Twitter’s Election Integrity Update

7 Deadly Breaches of 2018 (So Far)