Undisclosed OpenSSL vulnerability: Free scripts for target scoping

Jonathan Rau
Monday, Oct 31st, 2022

Tomorrow is “patch Tuesday,” and it's a notable one. The OpenSSL project team announced last week that they will release OpenSSL version 3.0.7 with a patch for a critical security vulnerability. Until the details are disclosed, little is known about the nature of the vulnerability. In 2014, Heartbleed proved to be an extremely serious vulnerability that had been in the codebase for nearly two years, going back to 2012, and to this day it remains unpatched in many environments. In 2016, OpenSSL had several other vulnerabilities and security bulletins, ranging from a serious POODLE-style padding-oracle attack against the AES-NI CBC MAC check to less severe buffer overflows. 

While many are worried about what might happen once the details are disclosed, this post answers the call to action by helping you find which resources are potentially impacted by the upcoming OpenSSL v3.x vulnerability. Below you will find several Python scripts that attempt to locate resources with OpenSSL installed, either directly or as an upstream dependency in their software supply chain. Hopefully these will assist Panoptica customers and other security professionals in the broader community. 

Locating OpenSSL: Microsoft Defender for Endpoint (MDE) 

Microsoft Defender for Endpoint (MDE), formerly known as Microsoft Defender Advanced Threat Protection (ATP), is an Extended Detection & Response (XDR) agent within the Microsoft Defender ecosystem that provides Threat & Vulnerability Management (TVM), Endpoint Detection and Response (EDR), and other security capabilities to users of the agent. MDE can provide XDR coverage of varying maturity to Linux, Windows, and macOS devices within your environment, including cloud-based instances. 

Important Note: The following script relies on an Enterprise Application (EA) being created with the necessary permissions to access various Defender APIs, and on your tenant ID, the EA client ID, and the EA secret being stored in encrypted AWS Systems Manager (SSM) Parameter Store parameters. Refer to this blog for more information on this setup. 
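
As a sketch of that setup step, the snippet below stores each secret as an encrypted SecureString parameter. The parameter names and helper functions here are hypothetical (substitute your own naming scheme and pass them via the `AZURE_APP_*_PARAM` environment variables the script below reads):

```python
def build_secure_parameters(params):
    """Build one put_parameter request per secret, as an encrypted SecureString."""
    return [
        {'Name': name, 'Value': value, 'Type': 'SecureString', 'Overwrite': True}
        for name, value in params.items()
    ]

def store_defender_secrets(ssm_client, params):
    """Write each secret to SSM Parameter Store.

    ssm_client: a boto3 SSM client, e.g. boto3.client('ssm')
    """
    for request in build_secure_parameters(params):
        ssm_client.put_parameter(**request)

# Hypothetical parameter names - point the AZURE_APP_*_PARAM environment
# variables used by the script below at whatever names you choose here
DEFENDER_PARAMS = {
    '/defender/tenant-id': '<your-azure-tenant-id>',
    '/defender/client-id': '<your-ea-client-id>',
    '/defender/secret-id': '<your-ea-secret>',
}
```

With real values in place, a one-time `store_defender_secrets(boto3.client('ssm'), DEFENDER_PARAMS)` seeds the parameters the script depends on.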

This script will retrieve credentials from AWS SSM Parameter Store, authenticate to the Microsoft Security Graph, retrieve every single active MDE machine, and then retrieve the installed software. Only machines with OpenSSL installed will be written to a JSON document along with basic metadata such as the computer name and IP addresses identified by MDE. The installed software API does not provide any version numbers or Common Platform Enumeration (CPE) identifiers. 
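
At its core, the filter the script applies is just a regular-expression search over each machine's installed-software names. A minimal, self-contained sketch of that step, using a mocked payload shaped like the example output below (the entries are illustrative):

```python
import re

# Case-insensitive match, since software names can vary in casing
openssl_regex = re.compile('openssl', re.IGNORECASE)

def filter_openssl(software_entries):
    """Return only the software entries whose name mentions OpenSSL."""
    return [s for s in software_entries if openssl_regex.search(s['name'])]

# Mocked entries shaped like the MDE installed-software API response
mock_software = [
    {'id': 'ubuntu-_-python3-openssl_for_linux', 'name': 'python3-openssl_for_linux'},
    {'id': 'canonical-_-ubuntu', 'name': 'ubuntu'},
]

matches = filter_openssl(mock_software)
```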

import boto3
import os
import requests
import json
import re

tenantIdParam = os.environ['AZURE_APP_TENANT_ID_PARAM']
clientIdParam = os.environ['AZURE_APP_CLIENT_ID_PARAM']
secretIdParam = os.environ['AZURE_APP_SECRET_ID_PARAM']

def get_token():
    ssm = boto3.client('ssm')

    tenantId = ssm.get_parameter(Name=tenantIdParam,WithDecryption=True)['Parameter']['Value']
    clientId = ssm.get_parameter(Name=clientIdParam,WithDecryption=True)['Parameter']['Value']
    secretId = ssm.get_parameter(Name=secretIdParam,WithDecryption=True)['Parameter']['Value']

    tokenUrl = f'https://login.microsoftonline.com/{tenantId}/oauth2/token'
    resourceAppIdUri = 'https://api.securitycenter.microsoft.com'

    data = {
        'grant_type': 'client_credentials',
        'client_id': clientId,
        'resource' : resourceAppIdUri,
        'client_secret': secretId
    }

    r = requests.post(
        tokenUrl,
        data=data
    )

    token = r.json()['access_token']

    print('SSM Parameters processed and OAuth Token created')

    del data
    del ssm

    return token

def get_machines():
    # Retrieve OAuth token for Bearer AuthN
    token = get_token()
    # Create empty list to hold MDE Machine data
    mdeMachines = []

    headers = {'Authorization': f'Bearer {token}'}

    # Retrieve all Machines
    r = requests.get(
        'https://api-us.securitycenter.microsoft.com/api/machines',
        headers=headers
    )
    # Loop through the Machine data from MDE, dropping verbose IP details
    # and skipping any "Inactive" Machines
    for v in r.json()['value']:
        # drop the IP address details
        del v['ipAddresses']
        # Skip "Inactive" Machines
        if str(v['healthStatus']) == 'Inactive':
            continue
        mdeMachines.append(v)

    get_machine_vulns(mde_machines=mdeMachines)

def get_machine_vulns(mde_machines):
    '''
    This function receives a parsed list of MDE Machines, retrieves each machine's
    installed software, and writes any OpenSSL matches to a local JSON file.
    '''
    # Retrieve OAuth token for Bearer AuthN
    token = get_token()
    headers = {'Authorization': f'Bearer {token}'}

    # OpenSSL regex (case-insensitive, since software names may vary in casing)
    opensslRegex = re.compile('openssl', re.IGNORECASE)

    # Create an empty list to house all of the machine vulnerabilities
    openSslMachines = []

    print('Finding MDE-enabled Machines with OpenSSL installed.')

    for machine in mde_machines:
        # parse the machine payload
        machineId = machine['id']
        computerDnsName = machine['computerDnsName']
        osPlatform = machine['osPlatform']
        osVersion = machine['osVersion']
        lastIpAddress = machine['lastIpAddress']
        lastExternalIpAddress = machine['lastExternalIpAddress']

        r = requests.get(
            f'https://api-us.securitycenter.microsoft.com/api/machines/{machineId}/software',
            headers=headers
        )
        # Keep only software entries whose name matches OpenSSL
        try:
            for v in r.json()['value']:
                # Check for the software name - if it doesn't match OpenSSL, skip it over
                if not opensslRegex.search(v['name']):
                    continue
                else:
                    openSslDict = {
                        'MachineId': machineId,
                        'ComputerDnsName': computerDnsName,
                        'OsPlatform': osPlatform,
                        'OsVersion': osVersion,
                        'LastIpAddress': lastIpAddress,
                        'LastExternalIpAddress': lastExternalIpAddress,
                        'SoftwareId': v['id'],
                        'SoftwareName': v['name']
                    }
                    openSslMachines.append(openSslDict)
        except KeyError as ke:
            print(ke)
            continue

    with open('./mde_openssl_machines.json', 'w') as jsonfile:
        json.dump(
            openSslMachines,
            jsonfile,
            indent=4,
            default=str
        )

    print('Complete')

get_machines()

The expected output from this script will resemble this example:

{
    "MachineId": "29161338f0EXAMPLEEXAMPLE6186c85b3bad85c8",
    "ComputerDnsName": "ip-10-EXAMPLE-1-37.us-east-7.compute.internal",
    "OsPlatform": "Ubuntu",
    "OsVersion": null,
    "LastIpAddress": "10.EXAMPLE.1.37",
    "LastExternalIpAddress": "3.EXAMPLE.54.108",
    "SoftwareId": "ubuntu-_-python3-openssl_for_linux",
    "SoftwareName": "python3-openssl_for_linux"
}

Locating OpenSSL: AWS Systems Manager Inventory 

Within the AWS Cloud, AWS Systems Manager (SSM) is a management and governance service that combines several configuration management utilities, such as patch management, package management, and software asset management, into a suite of easy-to-use tools and APIs. Within the SSM ecosystem is Fleet Manager, a sub-service that collates EC2 instances and on-premises servers managed by SSM Hybrid Activations along with their patch status, compliance status (e.g., from tools such as Chef InSpec), and their inventory. 

Using the ListInventoryEntries API from SSM, you can retrieve specific packages using filtering operators such as “BeginWith” to attempt to find installed packages. This method requires your EC2 instances to be onboarded to AWS SSM and to have a correctly configured Inventory collection State Manager association. Additionally, it does not account for edge cases where an installed package is not named OpenSSL but relies on it upstream. 
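
The server-side filter can be sketched in isolation. The helper below builds the same Filters structure the script passes to ListInventoryEntries, and `matches_begin_with` is a local approximation written for illustration (not an AWS API) of what the server-side “BeginWith” operator does:

```python
def openssl_inventory_filter():
    """Build the Filters argument for SSM ListInventoryEntries: match any
    AWS:Application entry whose Name begins with 'openssl'."""
    return [
        {
            'Key': 'Name',
            'Values': ['openssl'],
            'Type': 'BeginWith'
        }
    ]

def matches_begin_with(entry_name, filters):
    """Local approximation of the server-side 'BeginWith' operator."""
    for f in filters:
        if f['Type'] == 'BeginWith' and any(entry_name.startswith(v) for v in f['Values']):
            return True
    return False
```

Note that a prefix match on `openssl` catches packages such as `openssl-libs`, but not packages like `libssl3` that merely depend on OpenSSL, which is the edge case mentioned above.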

The following script will loop through all the opted-in AWS Regions for your Account, attempt to find EC2 instances, determine whether they are onboarded to SSM, and then write any OpenSSL packages found to a JSON file. 

from botocore.config import Config
import boto3
import json

# Boto3 Client Configuration for retries. AWS Defaults to 4 Max Attempts in "Normal Mode"
# Adaptive is a beta feature that will attempt exponential backoffs and different headers
# to avoid being throttled
config = Config(
   retries = {
      'max_attempts': 10,
      'mode': 'adaptive'
   }
)

def get_opted_in_aws_regions():
    ec2 = boto3.client('ec2')

    print('Getting all AWS Regions')
            
    # create empty list for all opted-in Regions
    regionList = []

    try:
        # Get all Regions we are opted in for
        for r in ec2.describe_regions()['Regions']:
            regionName = str(r['RegionName'])
            optInStatus = str(r['OptInStatus'])
            if optInStatus == 'not-opted-in':
                pass
            else:
                regionList.append(regionName)
        
        print('All Regions retrieved from EC2 service')
    except Exception as e:
        raise e
        
    print('Got all AWS Regions')

    return regionList

def get_ec2_instances():
    # List to hold Instances with openssl
    openSslInstances = []
    # Loop all Regions
    for region in get_opted_in_aws_regions():
        session = boto3.Session(region_name=region)
        ec2 = session.client('ec2', config=config)
        ssm = session.client('ssm', config=config)

        paginator = ec2.get_paginator('describe_instances')
        iterator = paginator.paginate(
            Filters=[
                {
                    'Name': 'instance-state-name',
                    'Values': [
                        'running',
                        'stopped'
                    ]
                }
            ]
        )
        for page in iterator:
            for r in page['Reservations']:
                for i in r['Instances']:
                    # First, determine if the Instance belongs to a Spot Fleet, 
                    # if it does we do not want/need to collect it due to coverage discrepancies
                    try:
                        iLifecycle = i['SpotInstanceRequestId']
                    except KeyError:
                        iLifecycle = 'NotSpot'
                    if iLifecycle != 'NotSpot':
                        continue
                    else:
                        # Check if an instance is onboarded by SSM
                        instanceId = str(i['InstanceId'])
                        dii = ssm.describe_instance_information(
                            Filters=[
                                {
                                    'Key': 'InstanceIds',
                                    'Values': [instanceId]
                                }
                            ]
                        )
                        if not dii['InstanceInformationList']:
                            print(f'Instance {instanceId} in Region {region} is not managed by SSM.')
                        else:
                            for openssl in ssm.list_inventory_entries(
                                InstanceId=instanceId,
                                TypeName='AWS:Application',
                                Filters=[
                                    {
                                        'Key': 'Name',
                                        'Values': [
                                            'openssl'
                                        ],
                                        'Type': 'BeginWith'
                                    }
                                ]
                            )['Entries']:
                                openSslDict = {
                                    'InstanceId': instanceId,
                                    'AwsRegion': region,
                                    'ApplicationName': openssl['Name'],
                                    'PackageId': openssl['PackageId'],
                                    'ApplicationVersion': openssl['Version']
                                }
                                openSslInstances.append(openSslDict)

    with open('./aws_ec2_openssl_machines.json', 'w') as jsonfile:
        json.dump(
            openSslInstances,
            jsonfile,
            indent=4,
            default=str
        )

    print('Complete')

get_ec2_instances()

The expected output from this script will resemble this example:

[
    {
        "InstanceId": "i-02f6EXAMPLE98f5c0c",
        "AwsRegion": "us-east-7",
        "ApplicationName": "openssl-libs",
        "PackageId": "openssl-1.0.2k-24.amzn2.0.4.src.rpm",
        "ApplicationVersion": "1.0.2k"
    },
    {
        "InstanceId": "i-02f6EXAMPLE98f5c0c",
        "AwsRegion": "us-east-7",
        "ApplicationName": "openssl",
        "PackageId": "openssl-1.0.2k-24.amzn2.0.4.src.rpm",
        "ApplicationVersion": "1.0.2k"
    }
]

Locating OpenSSL: Anchore Grype & Syft 

This section deals with two related tools from Anchore, Grype and Syft, both of which are command-line utilities for scanning Docker images and filesystems. Both work with several “flavors” of Linux, such as Alpine, BusyBox, Amazon Linux, Ubuntu, and RHEL, as well as several language-specific package ecosystems such as RubyGems, NPM, Yarn, Wheel, and Poetry, plus their derivative files such as JARs and requirements.txt files.  

Grype is a vulnerability management tool that works on containers and filesystems, so it can be used on your cloud-based instances as well as any Docker images you build. Syft is a tool to generate Software Bill of Materials (SBOM) files in JSON, CycloneDX, and SPDX formats (among others), which can also be ingested by Grype for faster vulnerability management built on top of your SBOM creation process. SBOMs have gained popularity within the wider cybersecurity community due to Executive Orders issued by the United States Government. 

While vulnerability management tools and SBOM generators both retrieve CPEs and other package information from the operating system and package management tools, how they use this information differs widely. A vulnerability management tool typically matches installed packages against one or more vulnerability databases, such as the NIST National Vulnerability Database (NVD), and retrieves information about each specific vulnerability, such as its severity and description. 
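
Conceptually, that matching step reduces to looking up (package, version) pairs against a database of advisories. A toy sketch, using Heartbleed (CVE-2014-0160, which affected OpenSSL 1.0.1 through 1.0.1f) as the lone database entry; a real scanner matches on CPEs and version ranges against full NVD feeds:

```python
# Toy vulnerability "database": (package name, version) -> advisories.
# Heartbleed (CVE-2014-0160) affected OpenSSL 1.0.1 through 1.0.1f.
VULN_DB = {
    ('openssl', '1.0.1f'): [
        {'cve': 'CVE-2014-0160', 'description': 'Heartbleed: TLS heartbeat buffer over-read'}
    ]
}

def match_packages(installed):
    """Return a finding for every installed package present in the database -
    the core of what a vulnerability scanner does after inventorying packages."""
    findings = []
    for pkg in installed:
        for vuln in VULN_DB.get((pkg['name'], pkg['version']), []):
            findings.append({'package': pkg['name'], **vuln})
    return findings

findings = match_packages([
    {'name': 'openssl', 'version': '1.0.1f'},
    {'name': 'curl', 'version': '7.85.0'},
])
```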

An SBOM, on the other hand, is analogous to an ingredients label for your food, but in far greater detail: imagine if each ingredient also came with provenance information such as its upstream source, the factory it was produced in, and how it got to where you purchased it. That is closer to what an SBOM provides: it identifies the packages installed directly on your system as well as the upstream packages those depend on, giving you a full picture of transitive and deeply nested dependencies. Orienting your vulnerability management processes on SBOMs, instead of directly on filesystems and package management tools, helps address third-party risk use cases as well as the typical risk treatment efforts of your broader threat and vulnerability management program. 
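
You can see this upstream linkage directly in the package URLs (purls) that Grype and Syft emit: Debian and Ubuntu purls carry an `upstream` qualifier pointing back to the source package a binary package was built from. A small sketch (`purl_upstream` is a hypothetical helper written for this post, not part of either tool) that extracts it with the standard library:

```python
from urllib.parse import urlparse, parse_qs

def purl_upstream(purl):
    """Extract the 'upstream' qualifier from a package URL (purl), if present.
    This is how a renamed binary package like libssl-dev still points back
    to the openssl source package it was built from."""
    qualifiers = parse_qs(urlparse(purl).query)
    return qualifiers.get('upstream', [None])[0]

# A purl taken from the Grype example output shown later in this post
purl = 'pkg:deb/debian/libssl-dev@1.1.1n-0+deb11u3?arch=amd64&upstream=openssl&distro=debian-11'
```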

Before using this script, install both tools from the command line (this requires administrative permissions on your OS) and produce a JSON report from each tool against public Docker images. Do not run the last two commands until you have created the provided script. 

# Install Grype & Syft 
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sudo sh -s -- -b /usr/local/bin 
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sudo sh -s -- -b /usr/local/bin 
# Run each tool against some public containers 
grype python:latest --scope all-layers -o json --file ./python_latest_docker_grype.json 
syft ubuntu:latest --scope all-layers -o json --file ./ubuntu_latest_docker_syft.json 
# Use the script 
python3 ParseGrypeOpenSsl.py --scanner_source Grype --filename python_latest_docker_grype.json 
python3 ParseGrypeOpenSsl.py --scanner_source Syft  --filename ubuntu_latest_docker_syft.json 

The ParseGrypeOpenSsl.py script referenced in the above Bash script is as follows. 

import argparse
import re
import json

def parse_grype_json_output(filename):
    # Parsed output
    openSslGrype = []
    # OpenSSL Regex (case-insensitive)
    opensslRegex = re.compile('openssl', re.IGNORECASE)
    # Open the JSON file
    with open(f'./{filename}', 'r') as jsonfile:
        parsed = json.load(jsonfile)
        # Parse Source image info
        sourceType = parsed['source']['type']
        sourceTargetName = parsed['source']['target']['userInput']
        sourceTargetId = parsed['source']['target']['imageID']
        # Parse CPE matches
        for x in parsed['matches']:
            artifact = x['artifact']
            if not (
                opensslRegex.search(artifact['name']) or 
                opensslRegex.search(artifact['purl'])
                ):
                continue
            else:
                openSslDict = {
                    'SourceType': sourceType,
                    'SourceTargetName': sourceTargetName,
                    'SourceTargetId': sourceTargetId,
                    'ArtifactName': artifact['name'],
                    'ArtifactVersion': artifact['version'],
                    'ArtifactPurl': artifact['purl'],
                    'ArtifactLocations': artifact['locations'],
                    'ArtifactUpstreams': artifact['upstreams']
                }
                openSslGrype.append(openSslDict)

    with open('./parsed_grype_openssl.json', 'w') as jsonfile:
        json.dump(
            openSslGrype,
            jsonfile,
            indent=4,
            default=str
        )

    print('Complete')

def parse_syft_json_output(filename):
    # Parsed output
    openSslSyft = []
    # OpenSSL Regex (case-insensitive)
    opensslRegex = re.compile('openssl', re.IGNORECASE)
    # Open the JSON file
    with open(f'./{filename}', 'r') as jsonfile:
        parsed = json.load(jsonfile)
        # Parse Source image info
        sourceType = parsed['source']['type']
        sourceTargetName = parsed['source']['target']['userInput']
        sourceTargetId = parsed['source']['target']['imageID']
        # Parse CPE matches
        for artifact in parsed['artifacts']:
            # Some artifact types lack a purl or a metadata "source" field,
            # so default to empty values before matching
            purl = artifact.get('purl', '')
            metadata = artifact.get('metadata') or {}
            metadataSource = metadata.get('source', '') if isinstance(metadata, dict) else ''
            if not (
                opensslRegex.search(artifact['name']) or
                opensslRegex.search(purl) or
                opensslRegex.search(metadataSource)
            ):
                continue
            else:
                openSslDict = {
                    'SourceType': sourceType,
                    'SourceTargetName': sourceTargetName,
                    'SourceTargetId': sourceTargetId,
                    'ArtifactName': artifact['name'],
                    'ArtifactVersion': artifact['version'],
                    'ArtifactPurl': purl,
                    'ArtifactMetadataSource': metadataSource
                }
                openSslSyft.append(openSslDict)

    with open('./parsed_syft_openssl.json', 'w') as jsonfile:
        json.dump(
            openSslSyft,
            jsonfile,
            indent=4,
            default=str
        )

    print('Complete')

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # --scanner_source
    parser.add_argument(
        '--scanner_source',
        help='Specify which Anchore product you want to parse. Must be in JSON output format only.',
        required=True,
        choices=['Grype', 'Syft']
    )
    # --filename
    parser.add_argument(
        '--filename',
        help='Filename that corresponds to the JSON output of a scanner',
        required=True
    )

    args = parser.parse_args()
    if args.scanner_source == 'Grype':
        parse_grype_json_output(
            filename=args.filename
        )
    elif args.scanner_source == 'Syft':
        parse_syft_json_output(
            filename=args.filename
        )
    else:
        raise ValueError('scanner_source must be either Grype or Syft')

The expected output for Grype from this script will resemble this example:

{ 
    "SourceType": "image", 
    "SourceTargetName": "python:latest", 
    "SourceTargetId": "sha256:00cd1fb8bdcc67527e569dcdf5e4ad9d704b117eb961602804826281d641cac3", 
    "ArtifactName": "libssl-dev", 
    "ArtifactVersion": "1.1.1n-0+deb11u3", 
    "ArtifactPurl": "pkg:deb/debian/libssl-dev@1.1.1n-0+deb11u3?arch=amd64&upstream=openssl&distro=debian-11", 
    "ArtifactLocations": [ 
        { 
            "path": "/usr/share/doc/libssl-dev/copyright", 
            "layerID": "sha256:882fd36bfd35d8c0c12d8472686059e1a6943c23a1e12ff9c18bceec3027e47c" 
        }, 
        { 
            "path": "/var/lib/dpkg/info/libssl-dev:amd64.md5sums", 
            "layerID": "sha256:882fd36bfd35d8c0c12d8472686059e1a6943c23a1e12ff9c18bceec3027e47c" 
        }, 
        { 
            "path": "/var/lib/dpkg/status", 
            "layerID": "sha256:6b183c62e3d75c58f15d76cc6b6bedadab02270bff6d05ed239c763a63dce306" 
        }, 
        { 
            "path": "/var/lib/dpkg/status", 
            "layerID": "sha256:882fd36bfd35d8c0c12d8472686059e1a6943c23a1e12ff9c18bceec3027e47c" 
        } 
    ], 
    "ArtifactUpstreams": [ 
        { 
            "name": "openssl" 
        } 
    ] 
} 

The expected output for Syft from this script will resemble this example:

[
    {
        "SourceType": "image",
        "SourceTargetName": "ubuntu:latest",
        "SourceTargetId": "sha256:cdb68b455a141ed921945f6d39a8c0694a7e21a37b2b030488d73e38875a26cc",
        "ArtifactName": "libssl3",
        "ArtifactVersion": "3.0.2-0ubuntu1.6",
        "ArtifactPurl": "pkg:deb/ubuntu/libssl3@3.0.2-0ubuntu1.6?arch=amd64&upstream=openssl&distro=ubuntu-22.04",
        "ArtifactMetadataSource": "openssl"
    }
]

How Panoptica customers can discover any vulnerable OpenSSL endpoints 

Because the CVE has not been published yet, it does not have a number; Panoptica will therefore show this vulnerable asset as having the CVE ‘CVE-OPENSSL-UNDISCLOSED’. 

Starting from the next scan of each account, please follow these steps to identify all your vulnerable assets: 

  • Log in to the Panoptica platform. 
  • Go to the ‘Vulnerabilities’ page. 
  • Search for ‘CVE-OPENSSL-UNDISCLOSED’. 
  • If there are vulnerable assets, click on the result row. 
  • A side panel opens with the full Details and the Assets list on the second tab. 
  • Scroll down and click on ‘Show in Assets List’ to see the detailed list. 

The Panoptica Attack Path Prioritization Engine will prioritize vulnerable assets that are public and/or marked as containing sensitive data.  

If you are not currently a Panoptica customer, you can start for free and get an immediate OpenSSL vulnerability assessment and prioritization for your cloud environment.  

Security is better when we work together 

Whenever a critical vulnerability in a popular piece of software is disclosed, we are encouraged and inspired by how cloud defenders, blue teamers, PSIRT engineers, and many more come together to patch the gap. We're proud to be a part of this community and hope some of these scripts and recommendations will help.  
