Every user action can and should be tracked. On cloud platforms like AWS, user actions and service events pass through the platform’s management interfaces, whether the web console or the API, which means most of what happens in your cloud environment can be logged.

The transparency provided by comprehensive logging is one of the cloud’s most consequential security and compliance benefits. Logs give you a record of activity across your environment so that you can track access and user actions and identify potential errors. Businesses that use AWS must also understand how to leverage the platform’s tools to achieve the visibility they need to improve security, compliance, and governance through logging. AWS CloudTrail is one of the foremost logging tools offered today to help you achieve that visibility.

What Is AWS CloudTrail?

AWS CloudTrail is a logging service that records account activity across your AWS environment. When users, roles, or services carry out an action, it is recorded as a CloudTrail event. You can view events in the CloudTrail console’s event history interface, and, by default, CloudTrail retains this event history for the last 90 days.
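
If you prefer to query that event history programmatically rather than through the console, the CloudTrail API exposes the same data. Here’s a minimal sketch using Python and boto3 that lists recent console sign-ins; the lookup attribute and seven-day window are illustrative choices, not requirements.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up ConsoleLogin events recorded in event history over the last 7 days.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```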

AWS CloudTrail Best Practices

As with all AWS services, users must configure AWS CloudTrail correctly to leverage its security, governance, and compliance capabilities. The best practice tips below will allow you to optimize your use of AWS CloudTrail.

Create a Trail

While CloudTrail provides some useful logging capabilities out of the box, creating a trail makes the service far more capable, comprehensive, and configurable. A trail lets you specify which events are recorded and where they are delivered: CloudTrail sends them as log files to an Amazon S3 bucket that you specify. CloudTrail stores each event as a JSON object with information such as the time at which the event occurred, who made the request, the resources that were affected, and more.
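
As a rough illustration, here’s a minimal sketch of creating and starting a trail with Python and boto3. The trail and bucket names are hypothetical, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that delivers log files to an existing, CloudTrail-writable bucket.
cloudtrail.create_trail(
    Name="org-activity-trail",               # hypothetical trail name
    S3BucketName="example-cloudtrail-logs",  # hypothetical, pre-existing bucket
    IsMultiRegionTrail=True,                 # capture activity in every region
    IncludeGlobalServiceEvents=True,         # include IAM, STS, and other global services
    EnableLogFileValidation=True,            # discussed further below
)

# Trails do not record anything until logging is explicitly started.
cloudtrail.start_logging(Name="org-activity-trail")
```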

This is particularly important for companies that require a permanent, long-term record of cloud activity for compliance purposes. Without a trail, CloudTrail deletes logs after 90 days.

Enable CloudTrail in All Regions

Unless a trail is intended to focus exclusively on a specific region, you should enable CloudTrail logging for all regions. Enabling CloudTrail for all regions maximizes insight into activity on your AWS environment and ensures that issues don’t go unnoticed because they occur in an unlogged region. 
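
If you already have a single-region trail, you can convert it rather than recreate it. A minimal sketch, reusing the hypothetical trail name from above:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Convert an existing trail so it records events from all regions,
# including regions added to AWS after this call is made.
cloudtrail.update_trail(
    Name="org-activity-trail",  # hypothetical existing trail
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
```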

Ensure CloudTrail Is Integrated With CloudWatch

CloudTrail is most useful if it is integrated with AWS CloudWatch. While CloudTrail generates and stores comprehensive logs, they aren’t actionable unless they are available to users in a form that is easy to interpret and analyze. That’s CloudWatch’s primary role; it allows users to visualize and analyze logs and provides sophisticated alerting and automation capabilities based on logged events. 
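
Connecting a trail to CloudWatch Logs requires an existing log group and an IAM role that CloudTrail can assume to write to it. A minimal sketch, with hypothetical ARNs, might look like this:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Send a copy of the trail's events to CloudWatch Logs so they can be
# searched, visualized, and used to drive metric filters and alarms.
cloudtrail.update_trail(
    Name="org-activity-trail",
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:123456789012:log-group:cloudtrail/org-activity-trail:*"
    ),
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/CloudTrailToCloudWatchLogs",
)
```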

Store CloudTrail Logs in a Dedicated S3 Bucket

CloudTrail delivers trail log files to an S3 bucket. As we’ll see in a moment, it’s essential to control access to this bucket because it contains information that could be useful to a malicious actor. Implementing an effective access policy for CloudTrail logs is easier if they are stored in a dedicated bucket used only for that purpose.

Enable Logging on the CloudTrail S3 Bucket

Amazon S3’s server access logs record bucket access requests, helping administrators to understand who has accessed CloudTrail logs, information that may be useful during compliance audits, risk assessments, and security incident analysis. We recommend configuring the CloudTrail S3 bucket to generate server access logs and store them in a different bucket, which also has secure access controls. 
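
Server access logging is configured on the bucket itself rather than in CloudTrail. A minimal boto3 sketch, assuming both buckets already exist and the target bucket permits the S3 logging service to write to it:

```python
import boto3

s3 = boto3.client("s3")

# Record every access request made against the CloudTrail bucket and
# deliver those records to a separate, locked-down bucket.
s3.put_bucket_logging(
    Bucket="example-cloudtrail-logs",  # hypothetical CloudTrail bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-s3-access-logs",  # hypothetical target bucket
            "TargetPrefix": "cloudtrail-bucket-access/",
        }
    },
)
```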

Configure Least Privileged Access to CloudTrail Logs

As we have discussed in previous articles on AWS security, S3 buckets are often misconfigured so that their contents are publicly accessible. Exposing sensitive log data in this way creates a critical vulnerability. S3 buckets that store CloudTrail logs should not be publicly accessible. Only AWS account users who have a well-defined reason to view logs should be given access to the bucket, and access permissions should be reviewed regularly. 
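
At a minimum, the bucket’s public access settings should be locked down. The sketch below applies S3 Block Public Access to a hypothetical CloudTrail bucket; read access can then be granted to specific principals through a tightly scoped bucket policy or IAM policies.

```python
import boto3

s3 = boto3.client("s3")

# Block all forms of public access to the CloudTrail log bucket,
# regardless of what ACLs or bucket policies are applied later.
s3.put_public_access_block(
    Bucket="example-cloudtrail-logs",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```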

Encrypt CloudTrail Logs With KMS CMKs

CloudTrail logs are encrypted by default using S3-managed encryption keys. To gain greater control over log security, you can instead encrypt them with customer master keys (CMKs) managed in AWS Key Management Service (KMS).

There are several benefits to using CMKs instead of S3’s default server-side encryption. CMKs are under your control, so you can rotate and disable them. Additionally, CMK use can be logged by CloudTrail, providing a record of who used the keys and when they used them.
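
Switching a trail to a customer-managed key is a single API call once the key exists and its key policy allows CloudTrail to use it. A minimal sketch with a hypothetical key ARN:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Encrypt new log files with a customer-managed KMS key instead of
# S3-managed keys. The key's policy must grant CloudTrail permission
# to encrypt with it.
cloudtrail.update_trail(
    Name="org-activity-trail",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```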

Use CloudTrail Log File Integrity Validation

AWS CloudTrail logs play an essential role in the security and compliance of your AWS environment. As such, you must be able to determine the integrity of log files. If a bad actor gains access to AWS resources, they may delete or edit logs to obscure their presence. CloudTrail log file validation generates a digital signature of log files uploaded to your S3 bucket. The signature digest files can be used to verify that logs have not been edited or otherwise tampered with. 
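
Validation is enabled per trail. A minimal sketch, again using the hypothetical trail name from earlier; once enabled, the delivered digest files can be checked with the AWS CLI’s validate-logs command.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Have CloudTrail deliver digest files that can later be used to prove
# log files were not modified, deleted, or forged.
cloudtrail.update_trail(
    Name="org-activity-trail",
    EnableLogFileValidation=True,
)
```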

Define a Retention Policy for Logs Stored in S3

CloudTrail trails are stored indefinitely, which may be the right approach for your business. However, if you have different compliance or administrative requirements, you can set a retention policy using S3’s object lifecycle management rules. Management rules can archive log files to an alternative storage service, such as Amazon Glacier, or automatically delete them once they exceed the required retention period. 
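
Retention is enforced on the S3 bucket through a lifecycle rule rather than in CloudTrail itself. A minimal sketch, assuming a hypothetical bucket and a policy of archiving logs to Glacier after 90 days and deleting them after roughly seven years:

```python
import boto3

s3 = boto3.client("s3")

# Archive CloudTrail log files to Glacier after 90 days, then delete
# them once they exceed a seven-year retention period.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-cloudtrail-logs",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cloudtrail-archive-and-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "AWSLogs/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```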

AWS CloudTrail FAQs

What are Some Common Mistakes to Avoid When Setting Up CloudTrail?

When setting up CloudTrail, there are some common mistakes that reduce its effectiveness. One common mistake is not enabling CloudTrail in all regions where AWS services are being used. It is important to enable CloudTrail in every region to ensure comprehensive coverage of API activity.

Another mistake is not regularly reviewing and analyzing CloudTrail logs. It is essential to regularly monitor the logs to detect any suspicious activity or unauthorized access.

Additionally, not setting up proper permissions and access controls for CloudTrail can lead to security vulnerabilities. It is crucial to restrict access to CloudTrail logs to only authorized personnel.

Lastly, not integrating CloudTrail logs with other security tools and services can limit its effectiveness in threat detection and incident response. By integrating CloudTrail with other tools, organizations can enhance their overall security posture.

By avoiding these common mistakes, organizations can maximize the benefits of CloudTrail in enhancing security and compliance within their AWS environments.

What Functionality Does CloudTrail Processing Library Provide?

The CloudTrail Processing Library offers a comprehensive set of features aimed at simplifying the processing of CloudTrail logs. It enables users to perform tasks like regularly checking an SQS queue, interpreting messages from SQS, retrieving log files stored on S3, and efficiently analyzing the events contained in these log files with a strong emphasis on fault tolerance.

For a deeper understanding of its capabilities and detailed usage instructions, readers are encouraged to refer to the user guide segment within the CloudTrail documentation.

How Can I Optimize My CloudTrail Setup for Cost Efficiency?

One way to optimize your CloudTrail setup for cost efficiency is to carefully configure the data events that you want to monitor. By selecting only the necessary data events, you can reduce the amount of logs generated and stored, ultimately lowering your costs. You can also enable log file validation so that you can verify the integrity of the log files you are paying to store and rely upon.
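
Data event logging is configured per trail through event selectors. The sketch below, using hypothetical names, limits data events to write operations on a single sensitive S3 bucket instead of logging every object-level action in every bucket:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log management events as usual, but restrict high-volume data events
# to write-type S3 object activity in one sensitive bucket.
cloudtrail.put_event_selectors(
    TrailName="org-activity-trail",  # hypothetical trail
    EventSelectors=[
        {
            "ReadWriteType": "WriteOnly",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::example-sensitive-data/"],
                }
            ],
        }
    ],
)
```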

Another cost-saving measure is to utilize CloudTrail Insights, which automatically analyzes CloudTrail logs to identify and alert you to unusual activity. By proactively addressing potential security threats, you can prevent costly security breaches and minimize the impact on your organization.

Furthermore, consider enabling CloudTrail data event logging in specific AWS regions where your resources are located rather than globally. This targeted approach helps reduce unnecessary logging and storage costs associated with regions where you do not have any resources.

By implementing these cost optimization strategies, you can effectively manage your CloudTrail expenses while still maintaining a high level of security and compliance in your AWS environment.

How Does CloudTrail Help with Security and Compliance?

CloudTrail helps with security and compliance by providing a detailed history of API calls made within an AWS account. This audit trail can be used to track changes, investigate security incidents, and ensure compliance with regulations and internal policies. By monitoring and logging all API activity, CloudTrail helps organizations identify unauthorized access, detect unusual behavior, and maintain a secure environment.

Additionally, CloudTrail logs can be integrated with other security tools and services to enhance threat detection and incident response capabilities. Overall, CloudTrail plays a crucial role in enhancing the security posture of AWS environments and facilitating compliance with industry standards.

Are Your Business’s AWS CloudTrail Logs Secure and Compliant?

As a licensed CPA firm specializing in information security auditing and consulting, KirkpatrickPrice can help your business verify its cloud configurations, including CloudTrail configurations, through the following services: 

  • AWS Security Scanner: an automated cloud security tool that performs over 50 checks on your AWS environment, including controls related to AWS CloudTrail security.
  • Cloud security assessments: expert assessments to verify your cloud environment is configured securely. 
  • Cloud security audits: Comprehensive cloud audits that test your AWS, GCP, or Azure environment against a framework based on the Center for Internet Security (CIS) benchmarks. 

Contact a cloud security specialist to learn more about how KirkpatrickPrice can help your business to enhance and verify the security, privacy, and compliance of its cloud infrastructure.

The United States Healthcare and Public Health (HPH) sector is facing a dramatic increase in cyber-attacks that are disrupting patient care and safety.  Hospitals are facing directly targeted ransomware attacks that aim to disrupt clinical operations.

According to a new study (linked below) by the U.S. Department of Health and Human Services (HHS), 96% of small, medium, and large hospitals report that they are operating with end-of-life operating systems or software with known vulnerabilities, including on medical devices. Because of uneven adoption of critical security features and processes, along with the continually evolving threat landscape, more and more hospitals are being exposed to cyber-attacks.

If this fear resonates with you, we understand how you feel. We know that providing quality and uninterrupted patient care is your number one priority. It’s overwhelming to keep up with new security adoptions & implementations, especially with countless critical devices that are connected to your network and could result in the disruption of patient care. Throw in today’s evolving threats and there’s even more to keep up with.

With so many things to keep track of, you need someone to tell you what to be concerned about and how to protect yourself against it before losing billions of dollars or even a life. That’s where threat informed defense comes in – it will give you a clear game plan for gaining assurance.

To create this game plan, we need to know what we’re up against.  On April 17, 2023, The U.S. Department of Health and Human Services (HHS) 405(d) Program announced the release of its Hospital Cyber Resiliency Initiative Landscape Analysis. This analysis reviewed the active threats that hospitals are currently facing, and the cybersecurity capabilities of hospitals across the country.  The report set out to identify the biggest threats hospitals and patient care are up against before identifying the controls and cyber practices that need to be in place to mitigate those threats. 

This blog will explore the threats identified by the report, and another will dive into the best practices healthcare organizations need to have in place to protect their patients from these threats.  Both will provide practical steps for strengthening your cyber defenses.

How Do Cyber Attacks Impact Healthcare Providers?

From small, independent practitioners to large, integrated health systems, cyber-attacks on healthcare records, IT systems, and medical devices have affected even the most protected systems. Cyber-attacks expose sensitive patient information and can lead to substantial financial costs to regain control of hospital systems and patient data.

According to the Landscape Analysis, “cyber incidents affecting hospitals and health systems lead to extended care disruptions caused by multi-week outages; patient diversion to other facilities; and strain on acute care provisioning and capacity, causing cancelled medical appointments, non-rendered services, and delayed medical procedures (particularly elective procedures).

More importantly, they put patients’ safety at risk and impact local and surrounding communities that depend on the availability of the local emergency department, radiology unit, or cancer center for life-saving care.”

When these cyber incidents impact operations, they impact the health and safety of the patients who trusted their healthcare providers. Patients deserve to trust their providers, and providers deserve to feel confident that they can provide the highest quality of care as well as keep patient data confidential.

The Top Five Threats Your Organization Needs to Know About

The Landscape Analysis reviewed active threats attacking hospitals and the cybersecurity capabilities of U.S. hospitals. Many types of threats were identified, but there are five your organization needs to know about to be truly prepared:

1. Ransomware Attacks

Ransomware is a type of malware designed to access an organization’s or user’s data and deny access to it until a ransom is paid. Attackers encrypt the data and require organizations to pay for the decryption key.

2. Cloud Exploitations by Threat Actors

Threat actors are targeting cloud infrastructures to gain access to sensitive data being transferred between organizations and their cloud providers.  By deploying a variety of tools onto vulnerable servers, hackers can exploit this transfer of data for their own gain.

3. Phishing/Spear-Phishing Attacks

Phishing is any effort by an attacker to gain sensitive information from an individual via email, social media, or even phone calls. By misleading employees into providing private information, malicious individuals can gain access to company systems, processes, or data. These attacks are not personalized; they are mass-generated with the hope that at least one individual will fall for the trap. Spear-phishing, by contrast, targets a specific individual or organization with a personalized message, which makes it harder to spot.

4. Software and Zero-Day Vulnerabilities

A zero-day vulnerability is a flaw in your device or system that is unknown to its developers, meaning no patch yet exists. Because they’re unknown, zero-day vulnerabilities can be very difficult to detect and easy for hackers to exploit.

5. Distributed Denial of Service (DDoS) Attacks

A Denial of Service (DoS) attack attempts to flood your network or servers with so many requests that it renders your server unusable, causing your website to crash. 

There are two types of DoS attacks: buffer overflow attacks and flood attacks. A buffer overflow attack consumes CPU time, hard disk space, and memory. A flood attack saturates the server’s capacity.

In a Distributed Denial of Service (DDoS) attack, the flooding attempts come from multiple sources.  

Discover Your Vulnerabilities Before an Attacker Does

These threats are intimidating and scary, but there is no better protection against them than discovering where your weaknesses are before an attacker does. The good news is that most of these attacks have similar attributes, and we know how the bad guys attack.

Healthcare providers can turn the tables on their adversaries and use their own characteristics and behaviors to validate and improve their defenses. Integrating threat informed offensive security capabilities into your defenses allows your environment to be assessed through the eyes of your greatest adversaries.

Partnering with a team of experts to research, model, and execute attack tactics, techniques, and procedures (TTPs) allows you to adjust your existing defenses to prevent any malicious efforts from affecting or damaging your data.  We understand how overwhelming and hard it is to keep up, but we also know just what to do to help you get ahead of the mess and fortify your defenses.

The only way you can really be confident that your organization is prepared to face these threats is to undergo an attack simulation.  When your organization chooses to participate in a penetration test, you can see how the controls you’ve put in place will stand up to the very-real threats you’re facing in a not-so-real simulation.

Here’s how it works:

  1. Make an attack plan
    • Partner with an expert to get a custom game plan on what you should test and how to execute your attack simulation. We begin with an initial workshop that’s focused on gaining knowledge of your threats and cyber capabilities. From there, we research your attack surface and TTPs that will inform a plan that aligns with your objectives.
  2. Test your security
    • Experience how your security defenses respond to a simulated cyber attack by an advanced ethical hacker. Our penetration testers will use their expertise and intuition to execute their TTPs, assess your attack surface, and discover any vulnerabilities within your security stance. This is normally done collaboratively working with defenders during the test, which allows for real time adjustments to defense and detection capabilities.
  3. Fortify your defenses
    • After the exploit, our professional writing team will deliver a report that gives insight into our team’s findings and their recommendations on defense, detection, and response improvements. After your remediations, our team will retest to assure that you’ve fortified your attack surface from future attacks.

Fortify Your Defenses with Offensive Security at KirkpatrickPrice

The KirkpatrickPrice approach to attack simulation prepares your organization to defend against the threats you are the most concerned about. Collaborative testing provides real time feedback on both mitigation and detection capabilities, clarity on gaps in your defenses, and ultimately assurance that your organization is prepared to withstand an attack.

Threats never stop.  Connect with one of our red team experts today so you can face them confidently.

In April 2016, the American Institute of Certified Public Accountants (AICPA) made an important update to the attestation standards that will affect your next SOC 1 audit. Statement on Standards for Attestation Engagements (SSAE) No. 18, Attestation Standards: Clarification and Recodification provides changes to SOC 1 audits and how attestation engagements are categorized.

Below, we explore the reason for this change and how SSAE 18 affects you.

What is SSAE 16?

SSAE 16 is the Statements on Standards for Attestation Engagements no. 16. It provides a set of standards and guidance for attestation reporting on organizational controls and processes at service organizations.

Audits using SSAE 16 generally result in System and Organization Controls (SOC 1) reports. Unlike earlier standards, SSAE 16 requires a written attestation from a service company’s management, stating that its description accurately represents organizational systems, control objectives, and operational activities that affect customers. However, in 2017, SSAE 18 superseded SSAE 16.

What is SSAE 18?

SSAE 18 is the current set of standards and guidance for reporting on organizational controls and processes at service organizations. It supersedes SSAE 16 through updated and simplified standards.

Like SSAE 16, SSAE 18 is used for SOC 1 reports; it also applies to SOC 2 and SOC 3 reports, which were previously conducted under AT section 101. Among other changes, SSAE 18 additionally requires that service organizations identify subservice organizations and provide risk assessments to auditors.

Learn more at our Guide to SAS 70 vs SSAE 16 vs SSAE 18.

SSAE 16 vs. SSAE 18: What’s the Difference?

SSAE and SOC are often used interchangeably. However, the two are distinct, and it’s useful to understand the difference. 

  • SSAE 18 is the Statement on Standards for Attestation Engagements no. 18. As the name suggests, it outlines standards and guidance for completing attestation engagements. These are the standards and processes CPAs follow when carrying out SSAE examinations.
  • SOC is the System and Organization Controls report. It is the report that CPAs produce after conducting an attestation engagement under the SSAE 18 standards.

Essentially, SSAE refers to the standards, and SOC refers to the report.

In 2016, the AICPA updated the Statement on Standards for Attestation Engagements No. 16 (SSAE 16) to No. 18 (SSAE 18). This change simplified and converged attestation standards related to SOC 1 audits.

Additionally, SSAE 18 expanded to cover more types of attestation reports (including SOC 2), whereas SSAE 16 was limited to SOC 1 reports.

What was the purpose of SSAE 16?

The purpose of SSAE 16 was to provide a framework, issued by the AICPA, that SOC 1 audits could follow. SSAE 16 stands for Statement on Standards for Attestation Engagements No. 16.

Each new Statement on Standards for Attestation Engagements  helps to simplify and converge attestation standards to unify with international standards and new technology.

Why the Change From SSAE 16 to SSAE 18?

With SSAE 18, the AICPA changed how attestation engagements, like SSAE 16, are defined and categorized. Even though change can be challenging, the SSAE 18 update simplifies and converges attestation standards to unify them with international standards.

Convergence

The Auditing Standards Board (ASB) is converging standards to unify them with international standards. As a result, regardless of which region of the world you’re in, the standards remain accepted and unified.

For example, if you are conducting business in Europe, you may have been issued an ISAE instead of an SSAE. Similarly, if conducting business in Canada, you may have been issued a CSAE.

Simplification

Another reason behind the shift from SSAE 16 to SSAE 18 is simplification. The attestation (AT) section of the AICPA professional standards, which deals with attestation engagements, contains several different standards. These AT sections are issued in the form of Statements on Standards for Attestation Engagements (SSAEs), each dealing with a different type of engagement.

SSAE 18 Sections

The AICPA is taking these different sections and putting them into one source. Many of the older, earlier numbers are going away and are being re-categorized and codified into one standard, SSAE 18. Those sections are:

  • AT sec. 20
  • AT sec. 50
  • AT sec. 101 (This was the standard we used in SOC 2 engagements)
  • AT sec. 201
  • AT sec. 301
  • AT sec. 401
  • AT sec. 601
  • AT sec. 701
  • AT sec. 801 (This was the standard we used in SOC 1/SSAE 16 engagements)

These engagements are being recodified into the following AT-C sections under SSAE 18:

  • AT-C sec. 105 (SOC 1 and SOC 2)
    • This section deals with Concepts Common to All Attestation Engagements
  • AT-C sec. 205 (SOC 1 and SOC 2)
    • This section deals with Examination Engagements
  • AT-C sec. 210
    • This section deals with Review Engagements
  • AT-C sec. 215
    • This section deals with Agreed-Upon Procedures Engagements. In other words, you may have a client that is asking for an independent audit to perform these procedures on their behalf and prepare a report. This engagement was separate prior to the SSAE 18.
  • AT-C sec. 305
    • This section deals with Prospective Financial Information.
  • AT-C sec. 310
    • This section deals with Reporting on Pro Forma Financial Information
  • AT-C sec. 315
    • This section deals with Compliance Attestations and provides guidance on how to perform compliance engagements that attest to compliance with laws and regulations. If you need an independent auditor to confirm that you’re compliant with HIPAA regulations or CFPB, for example, the auditor would refer to this section. The engagement that we used to call an SSAE 16 will now simply be referred to as a SOC 1 and will not be called SSAE 18.
  • AT-C sec. 320 (SOC 1)
    • This section deals with Reporting on an Examination of Controls at a Service Organization Relevant to User Entities’ Internal Control Over Financial Reporting
  • AT-C sec. 395
    • This section deals with Management Discussion and Analysis

The two engagements that we encounter the most are AT-C sec. 205 (SOC 1, SOC 2, HITRUST, CSA) and AT-C sec. 320 (SOC 1).

AT-C sec. 205 is applicable for independent subject matter that has been published that an independent auditor can use to attest to the fact that the client is complying with the controls in CSA or HITRUST. AT-C sec. 320 deals specifically with reporting on internal control over financial reporting.

We most commonly see this with payment processors, collection agencies, data centers, or hosting systems who are hosting or running accounting or accounts receivable on behalf of clients. Those service organizations are responsible for the physical and environmental controls that may impact a clients’ financial reporting.

SSAE 16 is only valid through April 2017. As of May 1st, 2017, these reports will be referred to as SOC 1, not SSAE 18.

What are the Changes to SOC 1 Audits With SSAE 18?

Stronger focus on Risk Assessment

Several main changes to SOC 1 audits occurred. The first is a stronger focus on risk assessment.

Over the last few years, data breaches have increased exponentially. Successful phishing attempts on personal email accounts have increased four-fold compared with corporate accounts, as attackers view individuals as easy targets, giving them more opportunities to damage systems and steal information.

The current threat landscape requires thoroughly addressing organizational risks. Several segments of the SOC 1 audit standard include strong language around risk identification and risk management, which we interpret as a formal and documented risk assessment.

Here is some example language from the standard that alludes to requiring a formal risk assessment process:

  • The SOC 1 audit standard now requires that Management acknowledges and accepts its responsibility for identifying the risks that threaten the achievement of the control objectives stated in the description and designing, implementing, and documenting controls that are suitably designed and operating effectively to provide reasonable assurance that the control objectives stated in the description of the service organization’s system will be achieved.

KirkpatrickPrice urges clients to heavily involve management in the risk assessment process because they must acknowledge and accept responsibility for identifying and mitigating risks that threaten the achievement of the control objectives stated in management’s description.

  • Auditor must verify whether management properly identified all risks that threaten the achievement of the control objectives stated in management’s description.

The SOC 1 audit now requires that auditors identify whether all risks were appropriately identified and addressed and determine what is missing. If a formal risk assessment process has not taken place, the auditor will likely uncover gaps and insufficiencies.

  • Auditor must obtain an understanding of management’s process for identifying and evaluating the risks that threaten the achievement of the control objectives and assessing the completeness and accuracy of management’s identification of those risks.

The SOC 1 standard used to say “formal or informal” risk assessment process, but now, the SOC 1 is asking auditors to understand management’s process and assess if it is complete and correct.

  • Auditor must evaluate the linkage of the controls identified in management’s description of the service organization’s system with those risks and determine that the controls have been implemented.

Your auditor must attest to whether the appropriate controls are in place.

  • The auditor also must evaluate whether such information is sufficiently reliable for the service auditor’s purposes by obtaining evidence about its accuracy and completeness and evaluating whether the information is sufficiently precise and detailed.

Your auditor will be determining whether your risk assessment process is accurate and complete, which indicates that a formal risk assessment is necessary. They are also required to obtain evidence that the information provided is reliable.

Monitoring Subservice Organizations

Another major change requires service organizations to monitor control effectiveness at their subservice organizations. As a result, service organizations must not only identify the critical organizations they rely on to provide their services but also monitor whether those organizations are complying with all relevant standards.

We have many clients who outsource or supplement internal staff with a third party to perform critical business operations. Service organizations are now required to manage their subservice organizations’ compliance. As a result, they must combine ongoing monitoring (ensuring potential issues are identified in a timely manner) with separate evaluations (confirming that the effectiveness of internal control is maintained over time).

Organizations must understand vendor risks and ensure they meet the control objectives in the description. Six examples given in the SOC 1 standard for accomplishing this requirement are:

  • Reviewing and reconciling output reports
  • Holding periodic discussions with the subservice organization
  • Making regular site visits to the subservice organization
  • Testing controls at the subservice organization by members of the service organization’s internal audit function
  • Reviewing Type I or Type II reports on the subservice organization’s system
  • Monitoring external communications, such as customer complaints relevant to the services provided by the subservice organization

How to Make the Shift to the New SOC 1 Audit?

When shifting to the SOC 1 audit standard, all organizations must first perform a formal risk assessment. KirkpatrickPrice helps companies accomplish this by offering specialized resources to facilitate the assessment. We also offer many resources dealing with risk assessment and tools to help you begin documenting on your own.

Next, organizations should assess their vendor compliance management. When managing vendors, you must define the risks your vendors pose to your organization and the services you rely on them to provide.

Is there anything going on in their environment that would cause you to be non-compliant? KirkpatrickPrice’s Online Audit Manager is a great tool that service organizations are using to manage and monitor vendor compliance.

If you have any questions regarding the changes to SOC 1, contact us today.

More SOC 1 Resources

Understanding Your SOC 1 Report Video Series 

SOC 1 Compliance Checklist: Are You Prepared for an Audit? 

How to Read Your Vendor’s SOC 1 or SOC 2 Report? 

To protect the security of cardholder data, the PCI Security Standards Council requires organizations that work with payment cards to maintain compliance with the PCI DSS. If you’re an entity that stores, processes, or transmits cardholder data, it’s imperative to regularly conduct a PCI audit to ensure compliance.

Below, we will define common PCI requirements and discuss the seven steps of conducting a PCI audit.

What Is a PCI Audit?

A PCI audit is a rigorous examination of your compliance with the Payment Card Industry Data Security Standard (PCI DSS), which consists of nearly 400 individual controls. It is a critical part of staying in business for any merchant, service provider, or subservice provider involved in handling cardholder data.

At KirkpatrickPrice, our PCI audit program takes a seven-step approach to help your organization gain PCI compliance.

Beginner's Guide to PCI Compliance

Starting a PCI audit is overwhelming.

Our Beginner’s Guide to PCI Compliance will prepare you to complete your audit successfully.

You know you need a PCI audit, but don’t know what to expect or how to get started. This guide will prepare you for what your auditors are looking for and how to confidently begin your PCI compliance journey.

Get the Guide

The 7 Steps of a PCI Audit

1. Gap Analysis

How do you conduct a PCI compliance internal audit? Before beginning a PCI audit for the first time, we recommend conducting a gap analysis.

A gap analysis helps identify any administrative, physical, and technical gaps in your information security program, specifically, how you handle cardholder data. Going through a gap analysis allows our senior-level QSAs to understand your business and your level of readiness for a PCI audit. The gap analysis is an important step towards PCI compliance because your QSA can create remediation strategies that will guide you through the PCI audit process and towards compliance. Next, your organization will move on to remediate the findings found during the gap analysis.

Learn more about what to look for in a QSA before beginning any PCI audit.

2. Remediation

At this point, you may have detected some areas of non-compliance. Remediation will help your organization recognize its gaps and remediate those areas for a smoother path toward PCI compliance.

Now that your organization understands its administrative, physical, and technical gaps, a QSA from KirkpatrickPrice will work to develop a detailed remediation plan with findings from the gap analysis and recommendations on proper ways to mitigate areas of non-compliance.

3. Scoping and Planning

After weeks of remediation work, it’s time to start the PCI audit by verifying the scope of the engagement. We will work with your organization to analyze your services, geographic locations, payment applications, third parties, and other system factors to develop an accurate scope for the PCI audit.

This stage prepares the entire engagement team to move to the next step of gathering information. One helpful tip: The narrower the scope, the more accurate and efficient your PCI audit process will be, so we aim for a detailed and defined scope.

4. Gathering

At KirkpatrickPrice, we will collect your policies, procedures, and other documentation needed for your PCI audit through the Online Audit Manager.

Alongside your designated Audit Support Professional and QSA, you will begin answering questions and describing systems relating to your organization’s internal controls. The Online Audit Manager provides a platform that streamlines the PCI audit process and aids you in completing 80% of the PCI audit before one of our senior-level QSAs even visits your office for an onsite visit.

Gathering and preparing data beforehand gives you the opportunity to be more effective with time and communication during your onsite visit.

5. Onsite Visit

An onsite visit is probably what you envision when thinking about a stereotypical audit. Onsite visits are important for testing internal controls and observing your people and technology in action. During the onsite visit, a senior-level QSA, who has been partnered with you throughout the PCI audit process, will observe and test your organization to determine if your processes meet the 12 requirements of PCI compliance.

6. Report Delivery

Next, you will receive a Report on Compliance (RoC), which provides a detailed account of the results of your PCI audit. To generate RoCs, KirkpatrickPrice has a team of Professional Writers who are trained and knowledgeable about the PCI DSS and write high-quality reports.

Your report will also go through our Quality Assurance processes to ensure it meets our quality standards. You can take a deep breath knowing your PCI audit was performed by a QSA and a firm that is committed to your organization’s compliance success!

7. Get on the List

We know the ultimate goal of completing a PCI audit is getting on the Visa Compliance List to give your clients an added level of assurance. By completing all the steps of your PCI DSS audit with a qualified auditing firm, you’ll receive a report to help you get on the list.

How to Market Your PCI Compliance

Going through the PCI compliance internal audit process can do more than assure your clients that their sensitive data is protected; PCI compliance can also be a powerful tool for your sales and marketing team.

How do you take your PCI compliance and market it to prospects and clients?

When you work with KirkpatrickPrice, you will receive a complimentary press kit that includes compliance logos, the writing and distribution of a press release announcing your recent PCI compliance, copy to use in marketing materials, and advice on how to best market your PCI compliance achievements.

Ready to Start Your PCI Compliance Audit?

We understand that PCI compliance can feel overwhelming. That’s why it’s so important to work with a qualified firm you can trust. At KirkpatrickPrice, we want to partner with you from audit readiness to final report.

Are you ready to work with a qualified QSA firm that partners with you throughout the PCI audit process? Connect with one of our experts today!

More PCI DSS Compliance Resources

Beginner’s Guide to PCI Compliance

PCI Demystified

What is a PCI audit?

Information security in the cloud depends on properly managing secrets, including AWS access keys. Authorized users and code must authenticate to use cloud resources. Authentication relies on shared secrets, but shared credentials may create security vulnerabilities, especially when shared naively by embedding them in application code. 

Embedding AWS access keys in code seems an efficient solution when, for example, your code needs to interact with the S3 API to store data in a bucket. However, it exposes the keys to anyone who sees the code.

AWS keys are often exposed in this way when code is uploaded to version control services like GitHub. However, publicly exposed code isn’t the only vulnerability to embedded access keys. Anyone inside the company with code access can view credentials they may not be authorized to use, undermining authentication and access control strategies.

Like giving out copies of your house key or leaving a spare under the mat, using AWS access keys in your code might seem handy, but it’s risky. If your code gets shared online, it’s like telling everyone where that spare key is. And even at work, not everyone should have a key to every door.

Below, we explore secure alternatives to embedding AWS access keys and other secrets in code.

What is an AWS Access Key?

Access keys are AWS’s primary long-term credential for programmatic authentication.  An AWS access key consists of an access key ID and a secret access key; together, they authenticate requests to AWS APIs, allowing users to interact with AWS services from their code, including via AWS CLI clients and SDKs. 

AWS access keys are associated with users in the AWS Identity and Access Management (IAM) platform. Because they are the programmatic equivalent of a username and password, they should be protected with the same diligence. Just as you wouldn’t embed your password in code, you should not embed your access key. 

How to Manage AWS Access Keys Securely

We’ll look at two ways to manage AWS access keys securely. The first is to avoid using them altogether, instead using temporary security credentials associated with AWS roles. The second takes advantage of AWS features to use access keys without exposing them needlessly.

Before discussing secure key management, a word of warning about the root user’s access key: the IAM root user has unconstrained access to every AWS resource. With the root user’s key, a bad actor could shut down servers, delete data, create and destroy users, or invoke any other AWS API capability.

For this reason, you should not use the root access key, and you should disable root user access keys already in use. In fact, it is good practice to avoid using the root account unless it’s strictly necessary, as we discussed in 10 Top Tips For Better AWS Security Today.

IAM Roles vs. IAM Users

An IAM role is an AWS identity with a set of permissions for making requests to AWS resources, but, unlike AWS users, roles are not associated with an individual. Users and applications can “assume” an IAM role, which allows them to take on the role’s permissions. Essentially, roles enable AWS customers to delegate permissions to other entities.

Roles have a couple of major advantages. First, a role can be attached to entities such as EC2 instances. That means the EC2 instance can request resources in line with the role’s permissions, obviating the need to embed an IAM user’s AWS access key in the code.

Second, roles can be used to create temporary credentials. IAM access keys are permanent until they are deleted, whereas a role’s temporary credentials automatically become invalid once a configurable time has elapsed.
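
To make the idea concrete, here is a minimal sketch of a script assuming a role through AWS STS and using the resulting temporary credentials; the role ARN and session name are hypothetical.

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary credentials tied to a role.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-s3-reader",  # hypothetical role
    RoleSessionName="nightly-report-job",
    DurationSeconds=3600,  # credentials expire automatically after an hour
)
creds = response["Credentials"]

# Use the short-lived credentials instead of a long-term access key.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```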

Secure Use of AWS Access Keys

In some cases, you may prefer to use an IAM user’s access key instead of an AWS role, but you should not embed credentials in the code. Instead, you can safely store the access key in a location your code can read.

One option is to create an environment variable within your code’s operating environment to store the key. Environment variables are managed by the environment’s operating system and can be accessed via system libraries or the AWS SDK for your preferred programming language. Several Amazon services can use AWS Secrets Manager to retrieve secrets to inject into the environment variables of containers and other resources.

Another option is the AWS credentials file. The credentials file is a text file containing an access key. AWS SDKs and the AWS CLI will look for a credentials file and use the access key it contains when making requests to AWS.
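
In practice, both options work through the SDK’s default credential chain, so the code itself never contains a key. A minimal sketch with a hypothetical bucket name:

```python
import boto3

# Never do this: a hardcoded key ends up in version control and code reviews.
# s3 = boto3.client("s3", aws_access_key_id="AKIA...", aws_secret_access_key="...")

# Instead, construct the client with no credentials at all. boto3 resolves them
# from its default chain: environment variables (AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY), the ~/.aws/credentials file, or an attached IAM role.
s3 = boto3.client("s3")
s3.upload_file("report.csv", "example-reports-bucket", "reports/report.csv")
```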

These methods—roles, environment variables, and credential files—are appropriate for different scenarios, but the critical point is this: embedding the AWS access key into your code is a bad idea.

How to Rotate AWS Access Keys

Rotation replaces an old key with a new key and retires the old key. AWS access keys are long-lasting credentials. If exposed, they may be exploited until the user or key is deleted. Key rotation limits the usefulness of leaked keys to bad actors.

AWS users can rotate keys in IAM without interrupting their software’s access to resources. The preferred approach is to create a new access key, update software to use the new key, and then make the old key inactive.

Once the user is satisfied all software is using the new key, they can delete the original.  AWS access key rotation can be carried out in the IAM web console, the AWS CLI, and the AWS API. 
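
Here’s a minimal boto3 sketch of that workflow, with a hypothetical IAM user name and old key ID; in a real rotation you would deploy and verify the new key before the final deletion step runs.

```python
import boto3

iam = boto3.client("iam")
user = "example-service-user"      # hypothetical IAM user
old_key_id = "AKIAOLDKEYEXAMPLE"   # hypothetical key being retired

# 1. Create a second access key (each IAM user can have at most two).
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print("Deploy this key to your software:", new_key["AccessKeyId"])

# 2. After the software is updated, deactivate the old key rather than
#    deleting it, so it can be re-enabled if something still depends on it.
iam.update_access_key(UserName=user, AccessKeyId=old_key_id, Status="Inactive")

# 3. Once nothing uses the old key, remove it permanently.
iam.delete_access_key(UserName=user, AccessKeyId=old_key_id)
```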

Mitigating Risk When AWS Access Keys are Exposed

While AWS users can prevent the exposure of AWS keys, what should they do if a key is exposed? First, you must immediately invalidate the key. However, doing so will also prevent legitimate use, which could result in service disruption. Leaked keys should be invalidated as soon as possible, but you may want to rotate mission-critical software keys first. 

The exposed key may already have been used, so you must also check all resources the key grants access to. Depending on the user’s access permissions, their key may have allowed a bad actor to exfiltrate sensitive data or infiltrate malicious software. 

Finally, use S3 logs and AWS CloudTrail to investigate whether the key was exploited and take action to mitigate potential risks and vulnerabilities. 

Securely Storing other Secrets with AWS Secrets Manager

You may need to securely manage other secrets in addition to AWS access keys, including SSH keys, database credentials, and third-party API keys. AWS Secrets Manager provides a solution for storing, rotating, managing, and retrieving a wide variety of secrets. 

For example, to give an application access to a database, you would store database credentials encrypted in AWS Secrets Manager. The application can query Secrets Manager, which will decrypt and return the database credentials over an encrypted connection. Access to data stored in AWS Secrets Manager is controlled by IAM permissions policies for users, groups, and roles, providing fine-grained access control. 
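
A minimal sketch of that flow, assuming a hypothetical secret that stores database credentials as JSON:

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# Retrieve and decrypt the stored credentials over an encrypted connection.
response = secrets.get_secret_value(SecretId="prod/inventory-db/credentials")
creds = json.loads(response["SecretString"])

# The username and password never appear in source code or configuration files.
db_user = creds["username"]
db_password = creds["password"]
```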

Partner with an Expert to Strengthen Your Cloud Security

To learn more about AWS cloud security, visit KirkpatrickPrice’s AWS Security Services to find a wealth of cloud security and AWS audit educational content.

If you would like to discuss AWS audits with an experienced auditor, contact KirkpatrickPrice today.