“git push” keeps asking for password [Deprecating password authentication]
This morning, while pushing changes to a GitHub repository, I was repeatedly asked to provide my username and password at the console.
Despite entering the correct username and password, I was not able to push the changes and instead got the message below.

It was working fine until last night, so I was wondering what could have caused the issue. I started looking at my access tokens in GitHub and saw that the one I had been using until last night had expired, hence the issue.
Password-based authentication for Git is deprecated (as evident from the screenshot above as well), so pushes have to use token-based authentication. GitHub provides personal access tokens (PATs) to use in place of a password on the command line or with the API. Below is how to generate a token and use it.
Create a token in GitHub
- Log in to GitHub and navigate to ‘Settings’ from within your profile.

- Click on Developer Settings. This will take you to the GitHub Apps page. Navigate to the “Personal access tokens” section.

- Click on Personal Access Tokens –> Tokens (classic).

- You will find all your tokens (active or expired). Here I found that my token had expired today, which is why the “git push” command ran successfully until last night.

- Click on ‘Generate new token’

- Make sure you note down the personal access token. It won’t be shown again, and if you didn’t note it down, there is no option but to create a new one.
- Use the newly created token with the ‘git push’ command:
$ git push https://<Personal-Access-Token>@github.com/<Your-Github-UserName>/<Name-of-your-repository>
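To avoid typing the token on every push, you can either embed it in the remote URL or cache it with a credential helper. Below is a minimal sketch using the same placeholders as above; note that in both cases the token ends up stored in plain text (.git/config or ~/.git-credentials), so protect those files.

# Option 1: store the token in the remote URL (it then lives in .git/config)
$ git remote set-url origin https://<Personal-Access-Token>@github.com/<Your-Github-UserName>/<Name-of-your-repository>.git
$ git push

# Option 2: keep the normal remote URL and cache the token with a credential helper
$ git config --global credential.helper store
$ git push   # enter <Your-Github-UserName> and the token once; later pushes reuse it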

Hope this helps. Happy reading.
~Anand M
Boto3 Script to create and attach an EBS Volume to an EC2
import boto3
import logging
import datetime
import argparse
import time
import sys
from datetime import datetime
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

VolumeList = []
expirationDate = expiration_Date = ""
DEFAULT_AWS_Account_ID = "1111222222"
DEFAULT_REGION = "us-east-1"


def parse_commandline_arguments():
    global REGION
    global AWS_Account_ID
    global instance_id
    global server_name
    global size_of_volume
    global kms_key
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter,
                                     description='Boto3 script to create and attach a volume to a given EC2 Instance.')
    parser.add_argument("-accountID", "--ownerID", dest="aws_ID", type=str, default=DEFAULT_AWS_Account_ID,
                        help="The AWS Account ID where volume tagging is to be done")
    parser.add_argument("-r", "--region", dest="region", type=str, default=DEFAULT_REGION,
                        help="Specify the region of the AWS Account")
    parser.add_argument("-server_name", "--ServerName", dest="servername", type=str,
                        help="Specify the Instance Name (Name tag) to which the new volume will be attached")
    parser.add_argument("-volume_size", "--Volume_Size", dest="volumesize", type=int,
                        help="Specify the size of new volume to be created and attached")
    parser.add_argument("-kmsId", "--KMS_ID", dest="kms_key_id", type=str,
                        help="Specify the KMS Key ID to encrypt the volume")
    args = parser.parse_args()
    REGION = args.region
    AWS_Account_ID = args.aws_ID
    server_name = args.servername
    size_of_volume = args.volumesize
    kms_key = args.kms_key_id


def ec2_client(region):
    """ Connects to EC2, returns a connection object """
    try:
        conn = boto3.client('ec2', region_name=region)
    except Exception as e:
        sys.stderr.write('Could not connect to region: %s. Exception: %s\n' % (region, e))
        conn = None
    return conn


def wait_for_state(instance, target_state):
    # Waits for instance to move to desired state
    # Vol Creation State: 'creating'|'available'|'in-use'|'deleting'|'deleted'|'error'
    # Vol Attachment State: 'attaching'|'attached'|'detaching'|'detached'
    # Note: expects a module-level boto3 EC2 resource named 'ec2'
    # (e.g. ec2 = boto3.resource('ec2')); this helper is not called in this script.
    status = ec2.Instance(instance).state['Name']
    while status != target_state:
        print("Waiting for Instance - {} to come in {} state".format(instance, target_state))
        time.sleep(5)
        status = ec2.Instance(instance).state['Name']


def create_and_attach_volume(client, serverName, volSize, kmsId):
    global VolumeList
    device = "/dev/sdh"
    print(serverName)

    # Get Instance ID from the given Instance Name
    filters = [{'Name': 'tag:Name', 'Values': [serverName]}]
    for attempt in range(5):
        try:
            response = client.describe_instances(Filters=filters)["Reservations"]
            instanceid = response[0]['Instances'][0]['InstanceId']
            availabilityZone = response[0]['Instances'][0]['Placement']['AvailabilityZone']
            print(instanceid + ":" + availabilityZone)
        except BaseException as err:
            logger.error(err)
            logger.info("*** ERROR *** during EC2 Describe process - retry...")
            time.sleep(0.5)
        else:
            logger.info("--> Done")
            break
    else:
        logger.error("*** ERROR *** - All attempts to describe instance failed - exit with error")
        raise Exception("*** ERROR *** - Can't describe instance")

    # Create volume
    for attempt in range(5):
        try:
            response = client.create_volume(
                AvailabilityZone=availabilityZone,
                Encrypted=True,
                KmsKeyId=kmsId,
                Size=volSize,
                VolumeType='gp3'  # Default Volume Type
            )
        except BaseException as err:
            logger.error(err)
            logger.info("*** ERROR *** during EC2 Volume creation process - retry...")
            time.sleep(0.5)
        else:
            logger.info("--> Done")
            break
    else:
        logger.error("*** ERROR *** - All attempts to create EC2 Volume failed - exit with error")
        raise Exception("*** ERROR *** - Can't create EBS Volume")

    if response['ResponseMetadata']['HTTPStatusCode'] == 200:
        volume_id = response['VolumeId']
        print('***volume:', volume_id)
        client.get_waiter('volume_available').wait(VolumeIds=[volume_id])
        print('***Success!! volume:', volume_id, 'created...')
        VolumeList.append(volume_id)
        print(VolumeList)

    # Add tag on newly created Volumes
    logger.info("Tagging the following Volumes:")
    for volume in VolumeList:
        logger.info("- " + volume)
    for attempt in range(5):
        try:
            print("creating Tag for Volume ID {}".format(VolumeList))
            client.create_tags(
                Resources=VolumeList,
                Tags=[
                    {
                        'Key': 'InstanceId',
                        'Value': instanceid
                    }
                ]
            )
        except BaseException as err:
            logger.error(err)
            logger.error("*** ERROR *** during tagging Volumes - retry...")
            time.sleep(0.6)
        else:
            logger.info("--> Done")
            break
    else:
        logger.error("*** ERROR *** - All attempts to tag volumes failed - exit with error")
        raise Exception("*** ERROR *** - Can't tag Volumes")

    # Attach Volume to EC2 Instance
    logger.info("--> Attaching volume to EC2")
    for attempt in range(5):
        try:
            if volume_id:
                print('***attaching volume:', volume_id, 'to:', instanceid)
                response = client.attach_volume(
                    Device=device,
                    InstanceId=instanceid,
                    VolumeId=volume_id,
                    DryRun=False
                )
                if response['ResponseMetadata']['HTTPStatusCode'] == 200:
                    client.get_waiter('volume_in_use').wait(VolumeIds=[volume_id], DryRun=False)
                    print('***Success!! volume:', volume_id, 'is attached to instance:', instanceid)
        except BaseException as err:
            logger.error(err)
            logger.error("*** ERROR *** during EC2 Volume attachment process - retry...")
            time.sleep(0.6)  # seconds
        else:
            logger.info("--> Done")
            break
    else:
        logger.error("*** ERROR *** - All attempts to attach volume to instance failed - exit with error")
        raise Exception("*** ERROR *** - Can't attach volume to EC2")


if __name__ == '__main__':
    try:
        parse_commandline_arguments()
        client = ec2_client(REGION)
        create_and_attach_volume(client, server_name, size_of_volume, kms_key)
    except Exception as error:
        logging.error(error)
        print(str(error))
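For reference, a typical invocation might look like the following; the script file name is just what I would save it as, and the values are placeholders:

$ python create_and_attach_volume.py -r us-east-1 -server_name <EC2-Name-Tag-Value> -volume_size 100 -kmsId <KMS-Key-ID>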
Happy Reading !!!!
-Anand M
Boto3 script to delete existing VPC Interface Endpoints from a given AWS Account
I recently developed a script using Boto3 and Python to delete specific VPC Interface Endpoints. These endpoints were deployed as part of the landing zone resources but are currently not being used. Such resources incur cost, so if they are not used it is good to remove them and save some money.
The intent is to call this script from a DevOps tool (like Ansible or Jenkins) to completely automate the task.
#!/usr/bin/env python
import boto3
import logging
import os.path
import time
import argparse

output_dir = "/tmp"
DEFAULT_AWS_Account_ID = "1111222222"
DEFAULT_REGION = "us-east-1"

client = boto3.client('ec2')

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# create console handler and set level to info
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
logger.addHandler(handler)

# create file handler and set level to info
# this is to have the output directed to both console and file
handler = logging.FileHandler(os.path.join(output_dir, "vpcendpointdelete.log"), "w",
                              encoding=None, delay=True)
handler.setLevel(logging.INFO)
logger.addHandler(handler)


def parse_commandline_arguments():
    global REGION
    global AWS_Account_ID
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter,
                                     description='Boto3 script to delete VPC Interface Endpoints from a given AWS Account.')
    parser.add_argument("-accountID", "--ownerID", dest="aws_ID", type=str, default=DEFAULT_AWS_Account_ID,
                        help="The AWS Account ID where VPC Endpoint is to be deleted")
    parser.add_argument("-r", "--region", dest="region", type=str, default=DEFAULT_REGION,
                        help="Specify the region of the AWS Account")
    args = parser.parse_args()
    REGION = args.region
    AWS_Account_ID = args.aws_ID


def remove_vpcendpoint(region):
    if region == "us-east-2":
        filters = [{'Name': 'service-name',
                    'Values': ['com.amazonaws.us-east-2.ec2', 'com.amazonaws.us-east-2.ec2messages',
                               'com.amazonaws.us-east-2.ssm', 'com.amazonaws.us-east-2.ssmmessages',
                               'com.amazonaws.us-east-2.monitoring']},
                   {'Name': 'vpc-endpoint-type', 'Values': ['Interface']}]
    if region == "us-east-1":
        filters = [{'Name': 'service-name',
                    'Values': ['com.amazonaws.us-east-1.ec2', 'com.amazonaws.us-east-1.ec2messages',
                               'com.amazonaws.us-east-1.ssm', 'com.amazonaws.us-east-1.ssmmessages',
                               'com.amazonaws.us-east-1.monitoring']},
                   {'Name': 'vpc-endpoint-type', 'Values': ['Interface']}]
    response = client.describe_vpc_endpoints(Filters=filters)
    for services in response['VpcEndpoints']:
        logger.info("Deleting VpcEndpoint ID : {} - Service Name : {}".format(services['VpcEndpointId'],
                                                                              services['ServiceName']))
        for attempt in range(5):
            try:
                client.delete_vpc_endpoints(VpcEndpointIds=[services['VpcEndpointId']])
            except BaseException as err:
                logger.error(err)
                logger.info("*** ERROR *** during VPC Interface Endpoint delete - retry...")
                time.sleep(0.5)
            else:
                logger.info("--> Done")
                break
        else:
            logger.error("*** ERROR *** - All attempts to delete VPC Interface Endpoint failed - exit with error")
            raise Exception("*** ERROR *** - Can't delete VPC Interface Endpoint")


if __name__ == '__main__':
    try:
        parse_commandline_arguments()
        remove_vpcendpoint(REGION)
    except Exception as error:
        logging.error(error)
        print(str(error))
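A sample invocation could look like the following, assuming the script is saved as delete_vpc_endpoints.py; the account ID and region are placeholders:

$ python delete_vpc_endpoints.py -accountID 111122223333 -r us-east-2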
Enjoy reading !!!
Anand M
Script to Enable AWS S3 Server Access Logging using Boto3
Many times we come across a situation where S3 bucket access logging is not enabled by default and, due to corporate security policy, such buckets are flagged as a security incident. Hence there was a need to enable server access logging programmatically, given the very large number of such S3 buckets.
Recently I developed a script using Boto3 to achieve this task. It helped enable logging for 100+ such buckets in ~30 minutes. I also configured a Jenkins job so that the task can be carried out by the L1 support team.
Script Name – EnableS3BucketLogging.py
#!/usr/bin/env python
import boto3
import time
import sys
import logging
import datetime
import argparse
import csv
import os
from botocore.exceptions import ClientError

print("S3 Listing at %s" % time.ctime())

DEFAULT_BUCKET = "ALL"
DEFAULT_REGION = "us-east-1"
DEFAULT_AWS_Account_ID = "1234567899765"
DEFAULT_AWS_Account_Name = "Dummy Account Name"


def parse_commandline_arguments():
    global REGION
    global AWS_Account_ID
    global AWS_Account_Name
    global BUCKET_NAME
    global target_bucket
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter,
                                     description='Enable S3 Server Logging if not enabled.')
    parser.add_argument("-accountID", "--ownerID", dest="aws_ID", type=str, default=DEFAULT_AWS_Account_ID,
                        help="The AWS Account ID where S3 logging is to be enabled")
    parser.add_argument("-r", "--region", dest="region", type=str, default=DEFAULT_REGION,
                        help="Specify the region of the AWS Account")
    parser.add_argument("-b", "--bucket", dest="bucket", type=str, default=DEFAULT_BUCKET,
                        help="Specify the bucket name")
    parser.add_argument("-accountName", "--AWSAccountName", dest="aws_account_name", type=str,
                        default=DEFAULT_AWS_Account_Name, help="Specify the AWS Account Name")
    args = parser.parse_args()
    REGION = args.region
    AWS_Account_ID = args.aws_ID
    BUCKET_NAME = args.bucket
    AWS_Account_Name = args.aws_account_name


def s3_resource(region):
    # Connects to S3, returns a resource object
    try:
        conn = boto3.resource('s3', region_name=region)
    except Exception as e:
        sys.stderr.write('Could not connect to region: %s. Exception: %s\n' % (region, e))
        conn = None
    return conn


def s3_client(region):
    """ Connects to S3, returns a client object """
    try:
        conn = boto3.client('s3', region)
    except Exception as e:
        sys.stderr.write('Could not connect to region: %s. Exception: %s\n' % (region, e))
        conn = None
    return conn


def grantaclBucket(s3_client, sourcebucket, targetbucket):
    try:
        acl = s3_client.get_bucket_acl(Bucket=sourcebucket)
        for d in acl['Grants']:
            if 'ID' in d['Grantee']:
                # If Grantee is NOT a URI, then a specific Grant needs to be given before enabling Logging
                canonical_id = d['Grantee']['ID']
                response = s3_client.put_bucket_acl(
                    AccessControlPolicy={
                        'Grants': [
                            {
                                'Grantee': {
                                    'Type': 'Group',
                                    'URI': 'http://acs.amazonaws.com/groups/s3/LogDelivery'
                                },
                                'Permission': 'READ_ACP'
                            },
                            {
                                'Grantee': {
                                    'Type': 'Group',
                                    'URI': 'http://acs.amazonaws.com/groups/s3/LogDelivery'
                                },
                                'Permission': 'WRITE'
                            }
                        ],
                        'Owner': {'ID': canonical_id},
                    },
                    Bucket=targetbucket
                )
            elif 'URI' in d['Grantee']:
                # If the Grant is already given to the URI, no explicit Grant is needed
                print("Log Delivery Group has the required permission...")
        return True
    except Exception as error:
        logging.error(error)
        return None


def enableAccessLogging(clientS3, sourcebucket, targetbucket, targetPrefix):
    try:
        response = clientS3.put_bucket_logging(
            Bucket=sourcebucket,
            BucketLoggingStatus={
                'LoggingEnabled': {
                    'TargetBucket': targetbucket,
                    'TargetPrefix': targetPrefix
                }
            },
        )
        return True
    except ClientError as e:
        logging.error(e)
        return None


def showSingleBucket(bucketName, s3, s3bucket, targetPrefix):
    "Displays the contents of a single bucket"
    if len(bucketName) == 0:
        print("bucket name not provided, listing all buckets....")
        time.sleep(8)
    else:
        print("Bucket Name provided is: %s" % bucketName)
        my_bucket = s3bucket.Bucket(bucketName)
        bucket_logging = s3bucket.BucketLogging(bucketName)
        bucket_logging_response = bucket_logging.logging_enabled
        if bucket_logging.logging_enabled is None:
            print("Bucket - {} is not logging enabled".format(bucketName))
            print("Bucket - {} logging is in progress...".format(bucketName))
            grantaclBucket(s3, bucketName, bucketName)  # Grant ACL to Log Delivery Group - mandatory before enabling logging
            enableAccessLogging(s3, bucketName, bucketName, targetPrefix)  # Enable Bucket Logging
        else:
            print("Bucket - {} Logging is already enabled.".format(bucketName))
            print("Target Bucket is - {}".format(bucket_logging_response['TargetBucket']))
            print("Target prefix is - {}".format(bucket_logging_response['TargetPrefix']))
    return


def showAllBuckets(s3, s3bucket, targetPrefix):
    try:
        response = s3.list_buckets()
        for bucket in response['Buckets']:
            my_bucket = bucket['Name']
            bucket_logging = s3bucket.BucketLogging(my_bucket)
            bucket_logging_response = bucket_logging.logging_enabled
            if bucket_logging.logging_enabled is None:
                print("Bucket - {} is not logging enabled".format(my_bucket))
                print("Bucket - {} logging is in progress...".format(my_bucket))
                grantaclBucket(s3, my_bucket, my_bucket)  # Grant ACL to Log Delivery Group
                enableAccessLogging(s3, my_bucket, my_bucket, targetPrefix)  # Enable Bucket Logging
            else:
                print("Bucket - {} Logging is already enabled.".format(my_bucket))
                target_bucket = bucket_logging_response['TargetBucket']
                target_prefix = bucket_logging_response['TargetPrefix']
    except ClientError as e:
        print("The bucket does not exist, choose how to deal with it or raise the exception: " + str(e))
    return


if __name__ == '__main__':
    try:
        parse_commandline_arguments()
        targetPrefix = 'S3_Access_logs/'
        s3_client_conn = s3_client(REGION)
        s3_resource_conn = s3_resource(REGION)
        print("<font size=1 face=verdana color=blue>Processing for AWS Account :- <b><font size=1 color=red> {}</font></b></font><br>".format(AWS_Account_ID))
        print("<font size=1 face=verdana color=blue>==============================</font><br><br>")
        if BUCKET_NAME == "ALL":
            showAllBuckets(s3_client_conn, s3_resource_conn, targetPrefix)
        else:
            showSingleBucket(BUCKET_NAME, s3_client_conn, s3_resource_conn, targetPrefix)
    except Exception as error:
        logging.error(error)
        print(str(error))
        print("Issue while enabling Server Access Logging")
This Python script is called from a shell script, where the environment is set using an “AssumeRole” function (a sketch of such a helper follows the wrapper script below).
Shell Script Name – EnableS3BucketLogging.sh
#!/bin/sh
if [[ $# -lt 2 ]]; then
  echo "Usage: ${0} <AccountID> <Bucket Name>"
  exit 1
fi
AccountID=${1}
BucketName=${2}
script_top=/u01/app/scripts
outputdir=${script_top}/output
logfile=${script_top}/logs/EnableS3BucketLogging.log
cat /dev/null > ${logfile}
unset AWS_SESSION_TOKEN AWS_DEFAULT_REGION AWS_SECRET_ACCESS_KEY AWS_ACCESS_KEY_ID
. /u01/app/scripts/bin/AssumeRole.sh ${AccountID}
# No need to set Region as Buckets are Global
echo "python ${script_top}/bin/EnableS3BucketLogging.py -accountID ${AccountID} -b ${BucketName}"
python ${script_top}/bin/EnableS3BucketLogging.py -accountID ${AccountID} -b ${BucketName}
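The AssumeRole.sh helper itself is not shown in this post. Below is a minimal sketch of what such a script might contain, assuming a role named “AdminRole” exists in the target account and jq is available; it is meant to be sourced so the exported keys land in the calling shell.

#!/bin/sh
# Hypothetical assume-role helper (role and session names are assumptions)
AccountID=${1}
CREDS=$(aws sts assume-role \
          --role-arn arn:aws:iam::${AccountID}:role/AdminRole \
          --role-session-name EnableS3BucketLogging \
          --output json)
export AWS_ACCESS_KEY_ID=$(echo "${CREDS}" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "${CREDS}" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "${CREDS}" | jq -r '.Credentials.SessionToken')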
Hope this helps. Happy reading !!!
~Anand M
Script to generate CSV for Compute Optimizer data from a Json file
Below is a script to generate a CSV file from a JSON output. I wrote it to collect Compute Optimizer data so that each EC2 instance has one line of data in the CSV file. Later on, this CSV file is uploaded to Google Sheets for further analysis.
The Python script “reportComputeOptData.py” is called from within the shell script “reportComputeOptData.sh”.
Python Script
import sys
import json
import pandas as pd

## Env is set for proper console display
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
## Env Setting - Ends

jsonfile = str(sys.argv[1])
csvfile = str(sys.argv[2])

with open(jsonfile) as file:
    data = json.load(file)

df = pd.DataFrame(data['instanceRecommendations'])

for i, item in enumerate(df['utilizationMetrics']):
    for k in range(len(df['utilizationMetrics'][i])):
        # Add a new column with a default value and then add/update the value of that column
        df.at[i, 'utilizationMetrics_name_{}'.format(k)] = dict(df['utilizationMetrics'][i][k])['name']
        df.at[i, 'utilizationMetrics_statistic_{}'.format(k)] = dict(df['utilizationMetrics'][i][k])['statistic']
        df.at[i, 'utilizationMetrics_value_{}'.format(k)] = dict(df['utilizationMetrics'][i][k])['value']
    for m in range(len(df['recommendationOptions'][i])):
        df.at[i, 'recommendationOptions_instanceType_{}'.format(m)] = dict(df['recommendationOptions'][i][m])['instanceType']
        df.at[i, 'recommendationOptions_performanceRisk_{}'.format(m)] = dict(df['recommendationOptions'][i][m])['performanceRisk']
        df.at[i, 'recommendationOptions_rank_{}'.format(m)] = dict(df['recommendationOptions'][i][m])['rank']
        for j in range(len(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'])):
            df.at[i, 'reco_projectedUtilizationMetrics_{}_name_{}'.format(m, j)] = dict(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'][j])['name']
            df.at[i, 'reco_projectedUtilizationMetrics_{}_statistic_{}'.format(m, j)] = dict(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'][j])['statistic']
            df.at[i, 'reco_projectedUtilizationMetrics_{}_value_{}'.format(m, j)] = dict(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'][j])['value']

df = df.drop({'utilizationMetrics', 'recommendationOptions'}, axis=1)
df.to_csv(csvfile, header=True, index=False)
print("CSV File generated at..- {}".format(csvfile))
Shell Script (which generates the JSON file that is then passed to the Python script to generate the CSV file)
#!/bin/sh
if [[ $# -lt 1 ]]; then
  echo "Usage: ${0} <AccountID> [<Region>]"
  exit 1
fi
NOW=$(date +"%m%d%Y%H%M")
AccountID=${1}
AWS_DEFAULT_REGION=${2}   ## 2nd argument: region, if different from the CLI server's default
script_top=/d01/app/aws_script/bin
outputdir=/d01/app/aws_script/output
csvfile=${outputdir}/${AccountID}_copt-${NOW}.csv
jsonfile=${outputdir}/${AccountID}_copt-${NOW}.json
#
## Reset Env variables
reset_env () {
  unset AWS_SESSION_TOKEN
  unset AWS_DEFAULT_REGION
  unset AWS_SECRET_ACCESS_KEY
  unset AWS_ACCESS_KEY_ID
} #end of reset_env

## Set Env function
assume_role () {
  AccountID=${1}
  source </path_to_source_env_file/filename> ${AccountID}
} # Function assume_role ends

assume_role ${AccountID}
if [[ ! -z "$2" ]]; then
  AWS_DEFAULT_REGION='us-east-2'
fi
#
## Generate json file
aws compute-optimizer get-ec2-instance-recommendations | jq -r . > ${jsonfile}

## Pass the json file to the python script along with the CSV file for the output
python ${script_top}/reportComputeOptData.py ${jsonfile} ${csvfile}
echo "CSV File generated... - ${csvfile}"
reset_env
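A typical run, using the account ID from the sample JSON below, would look something like this:

$ ./reportComputeOptData.sh 123404238928 us-east-2

This drops both the intermediate JSON and the resulting CSV in /d01/app/aws_script/output, named <AccountID>_copt-<timestamp>.json and .csv respectively.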
JSON file format
{
  "instanceRecommendations": [
    {
      "instanceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-04a67rqw6c029b82f",
      "accountId": "123404238928",
      "instanceName": "testserver01",
      "currentInstanceType": "c4.xlarge",
      "finding": "OVER_PROVISIONED",
      "utilizationMetrics": [
        {
          "name": "CPU",
          "statistic": "MAXIMUM",
          "value": 6.3559322033898304
        }
      ],
      "lookBackPeriodInDays": 14,
      "recommendationOptions": [
        {
          "instanceType": "t3.large",
          "projectedUtilizationMetrics": [
            {
              "name": "CPU",
              "statistic": "MAXIMUM",
              "value": 12.711864406779661
            }
          ],
          "performanceRisk": 3,
          "rank": 1
        },
        {
          "instanceType": "m5.large",
          "projectedUtilizationMetrics": [
            {
              "name": "CPU",
              "statistic": "MAXIMUM",
              "value": 12.711864406779661
            }
          ],
          "performanceRisk": 1,
          "rank": 2
        },
        {
          "instanceType": "m4.large",
          "projectedUtilizationMetrics": [
            {
              "name": "CPU",
              "statistic": "MAXIMUM",
              "value": 15.645371577574968
            }
          ],
          "performanceRisk": 1,
          "rank": 3
        }
      ],
      "recommendationSources": [
        {
          "recommendationSourceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-04a67rqw6c029b82f",
          "recommendationSourceType": "Ec2Instance"
        }
      ],
      "lastRefreshTimestamp": 1583986171.637
    },
    {
      "instanceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-0af6a6b96e2690002",
      "accountId": "123404238928",
      "instanceName": "TestServer02",
      "currentInstanceType": "t2.micro",
      "finding": "OPTIMIZED",
      "utilizationMetrics": [
        {
          "name": "CPU",
          "statistic": "MAXIMUM",
          "value": 96.27118644067791
        }
      ],
      "lookBackPeriodInDays": 14,
      "recommendationOptions": [
        {
          "instanceType": "t3.micro",
          "projectedUtilizationMetrics": [
            {
              "name": "CPU",
              "statistic": "MAXIMUM",
              "value": 39.1101694915254
            }
          ],
          "performanceRisk": 1,
          "rank": 1
        },
        {
          "instanceType": "t2.micro",
          "projectedUtilizationMetrics": [
            {
              "name": "CPU",
              "statistic": "MAXIMUM",
              "value": 96.27118644067791
            }
          ],
          "performanceRisk": 1,
          "rank": 2
        }
      ],
      "recommendationSources": [
        {
          "recommendationSourceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-0af6a6b96e2690002",
          "recommendationSourceType": "Ec2Instance"
        }
      ],
      "lastRefreshTimestamp": 1583986172.297
    }
  ],
  "errors": []
}
Enjoy reading !!!
Anand M
Error – gpg: cancelled by user/gpg: Key generation canceled.
While generating a GPG key, I was getting an error: the passphrase screen would immediately go away and control came straight back with the messages below.
gpg: cancelled by user
gpg: Key generation canceled.
-bash-4.2$ gpg --gen-key
gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: svc_WellsFargo
Email address: user@domain.com.com
Comment:
You selected this USER-ID:
    "svc_WellsFargo <user@domain.com.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
gpg: cancelled by user
gpg: Key generation canceled.
Solution Applied: It bugged me a lot, and I finally googled the solution (putting it here for everyone's benefit). This typically happens when the terminal device is not writable by the user running gpg (for example, after switching users with su), so the passphrase prompt cannot be displayed.
As the root user, run the command below to make the current terminal readable and writable again:
$ chmod o+rw $(tty)
Happy reading !!!
Anand M
EBS-SSO Integration with Oracle Identity Cloud Service (IDCS)
Recently I got an opportunity to do a POC for implementing SSO with Oracle EBS (12.2.5) using the Oracle IDCS approach. It is fairly simple and far less intrusive as far as changes within EBS are concerned.
One primary component of this solution is the EBS Asserter, which needs to be deployed and configured in the DMZ (security policy does not allow any core EBS component to be exposed in the DMZ).
This is a fully integrated solution with the in-house Active Directory, and it does not expose any critical data (user passwords) in the cloud. The POC was completely successful. Below is the data flow between the various components of EBS and Oracle IDCS.
Happy reading !!!
Anand M
Collect Cloudwatch metrics (including custom one) and upload to S3 bucket
Recently I wrote a script to pull CloudWatch metrics (including custom ones such as memory utilization) using the CLI. The objective is to publish the data to S3 and then, using Athena/QuickSight, create a dashboard that gives a consolidated view of CPU and memory utilization for all servers across all AWS accounts.
This dashboard helps in making the right decision on resizing instances, thereby optimizing overall cost.
The script is scheduled (using crontab) to run every hour. There are two parts to it:
1. collect_cw_metrics.py – This is the main script
2. collect_cw_metrics.sh – This is a wrapper and internally calls python script.
How the script is called :
/path/collect_cw_metrics.sh <Destination_AWS_Account ID> <S3_Bucket_AWS_Account_ID> [<AWS_Region>]
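For example, the hourly crontab entry could look like this (paths and account IDs are placeholders; the log redirection is optional):

0 * * * * /path/collect_cw_metrics.sh <Destination_AWS_Account_ID> <S3_Bucket_AWS_Account_ID> >> /tmp/collect_cw_metrics.log 2>&1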
Wrapper script – collect_cw_metrics.sh
#!/bin/sh
if [[ $# -lt 2 ]]; then
  echo "Usage: ${0} <AccountID> <S3_Bucket_AccountID>"
  exit 1
fi
NOW=$(date +"%m%d%Y%H%M")
AccountID=${1}
s3_AccountID=${2}
AWS_DEFAULT_REGION=${3}   ## 3rd argument: region, if different from the CLI server's default
csvfile=/tmp/cw-${AccountID}-${NOW}.csv
#
## Reset Env variables
reset_env () {
  unset AWS_SESSION_TOKEN
  unset AWS_DEFAULT_REGION
  unset AWS_SECRET_ACCESS_KEY
  unset AWS_ACCESS_KEY_ID
} #end of reset_env

## Set Env function
assume_role () {
  AccountID=${1}
  source </path_to_source_env_file/filename> ${AccountID}
} # Function assume_role ends

assume_role ${AccountID}
if [[ ! -z "$3" ]]; then
  AWS_DEFAULT_REGION='us-east-2'
fi
#
## Generate CSV file
python <path_of_the_script>/collect_cw_metrics.py ${AccountID} ${csvfile}
##
## Upload generated CSV file to S3
reset_env
assume_role ${s3_AccountID}
echo ${csvfile}
echo "Uploading data file to S3...."
aws s3 cp ${csvfile} <Bucket_Name>
reset_env
Main python Script – collect_cw_metrics.py
#!/usr/bin/python
# To Correct indent in the code - autopep8 cw1.py
import sys
import boto3
import logging
import pandas as pd
import datetime
from datetime import datetime
from datetime import timedelta

AccountID = str(sys.argv[1])
csvfile = str(sys.argv[2])

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# define the connection
client = boto3.client('ec2')
ec2 = boto3.resource('ec2')
cw = boto3.client('cloudwatch')


# Function to get instance Name
def get_instance_name(fid):
    ec2instance = ec2.Instance(fid)
    instancename = ''
    for tags in ec2instance.tags:
        if tags["Key"] == 'Name':
            instancename = tags["Value"]
    return instancename


# Function to get instance ID (mandatory for Custom memory Datapoints)
def get_instance_imageID(fid):
    rsp = client.describe_instances(InstanceIds=[fid])
    for resv in rsp['Reservations']:
        v_ImageID = resv['Instances'][0]['ImageId']
    return v_ImageID


# Function to get instance type (mandatory for Custom memory Datapoints)
def get_instance_Instype(fid):
    rsp = client.describe_instances(InstanceIds=[fid])
    for resv in rsp['Reservations']:
        v_InstanceType = resv['Instances'][0]['InstanceType']
    return v_InstanceType


# all running EC2 instances.
filters = [{
    'Name': 'instance-state-name',
    'Values': ['running']
}]

# filter the instances
instances = ec2.instances.filter(Filters=filters)

# locate all running instances
RunningInstances = [instance.id for instance in instances]
# print(RunningInstances)

dnow = datetime.now()
cwdatapointnewlist = []

for instance in instances:
    ec2_name = get_instance_name(instance.id)
    imageid = get_instance_imageID(instance.id)
    instancetype = get_instance_Instype(instance.id)
    cw_response = cw.get_metric_statistics(
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': instance.id
            },
        ],
        StartTime=dnow + timedelta(hours=-1),
        EndTime=dnow,
        Period=300,
        Statistics=['Average', 'Minimum', 'Maximum']
    )
    cw_response_mem = cw.get_metric_statistics(
        Namespace='CWAgent',
        MetricName='mem_used_percent',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': instance.id
            },
            {
                'Name': 'ImageId',
                'Value': imageid
            },
            {
                'Name': 'InstanceType',
                'Value': instancetype
            },
        ],
        StartTime=dnow + timedelta(hours=-1),
        EndTime=dnow,
        Period=300,
        Statistics=['Average', 'Minimum', 'Maximum']
    )
    cwdatapoints = cw_response['Datapoints']
    label_CPU = cw_response['Label']
    for item in cwdatapoints:
        item.update({"Label": label_CPU})
    cwdatapoints_mem = cw_response_mem['Datapoints']
    label_mem = cw_response_mem['Label']
    for item in cwdatapoints_mem:
        item.update({"Label": label_mem})
    # Add memory datapoints to CPUUtilization Datapoints
    cwdatapoints.extend(cwdatapoints_mem)
    for cwdatapoint in cwdatapoints:
        timestampStr = cwdatapoint['Timestamp'].strftime("%d-%b-%Y %H:%M:%S.%f")
        cwdatapoint['Timestamp'] = timestampStr
        cwdatapoint.update({'Instance Name': ec2_name})
        cwdatapoint.update({'Instance ID': instance.id})
        cwdatapointnewlist.append(cwdatapoint)

df = pd.DataFrame(cwdatapointnewlist)
df.to_csv(csvfile, header=False, index=False)
Unable to start WLST: “Problem invoking WLST – Traceback (most recent call last)”
The above error means that the correct CLASSPATH has not been set to start WLST, because the environment is not properly set. The same error can occur for other utilities such as weblogic.Deployer, utils.dbping or even weblogic.Server. I was using this to decrypt the WebLogic admin password.
The exact error was as below.
$ java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Problem invoking WLST - Traceback (most recent call last):
  File "/tmp/WLSTOfflineIni7376338350886613784.py", line 3, in <module>
    import os
ImportError: No module named os

-bash-4.1$
The root cause is always the same: the correct environment is not set, and hence the required classes are not loaded.
To fix this, set your environment explicitly:
1. Make sure the right environment is set by running the setWLSEnv.sh script, located under $MW_HOME/wlserver/server/bin.
The setWLSEnv.sh script needs to be sourced in order to execute it in the context of the running shell and actually set the shell’s environment. Run it as either “. ./setWLSEnv.sh” (notice the extra dot) or “source ./setWLSEnv.sh”; the CLASSPATH can then be confirmed in the current shell with the “env” command.
Simply executing the script as “./setWLSEnv.sh” will only display the output on the screen without changing the current shell’s environment.
Once the environment was set, I was able to run java weblogic.WLST successfully.
-bash-4.1$ . ./setWLSEnv.sh
CLASSPATH=/xxxxxx/fmw/jdk1.7.0_51/lib/tools.jar:/xxxxxx/fmw/product/12/wlserver/server/lib/weblogic_sp.jar:/xxxxxx/fmw/product/12/wlserver/server/lib/weblogic.jar:/xxxxxx/fmw/product/12/wlserver/server/webservices.jar:/xxxxxx/fmw/product/12/oracle_common/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/xxxxxx/fmw/product/12/oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:/xxxxxx/product/12/wlserver/modules/features/oracle.wls.common.nodemanager_1.0.0.0.jar:

PATH=/xxxxxx/fmw/product/12/wlserver/server/bin:/xxxxxx/fmw/product/12/oracle_common/modules/org.apache.ant_1.7.1/bin:/xxxxxx/fmw/jdk1.7.0_51/jre/bin:/xxxxxx/fmw/jdk1.7.0_51/bin:/xxxxxx/fmw/jdk1.7.0_51//usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/xxxxxx/fmw/product/12/oracle_common/modules/org.apache.maven_3.0.4/bin

Your environment has been set.

-bash-4.1$ java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline>
Hope this helps.
-Anand M
How to install Oracle Apex 5.1 and deploy it to Apache Tomcat Application Server on Linux (RHEL 6.7)
The requirement was to have Apex launched from Tomcat rather than using a standalone Apex installation.
The following components are needed for this setup:
1.Installation of Oracle Apex 5.1
Download it from http://www.oracle.com/technetwork/developer-tools/apex/downloads/index.html
Mine is 5.1.4 English Language ONLY
2.Installation and configuration of Apache Tomcat 9.0.6
Download it from https://tomcat.apache.org/
2.1 Installation of Java JDK 9.0.1
Download it from https://java.com/en/download/
3.Installation and configuration of Oracle REST Data Services (ORDS 17.4)
Download it from http://www.oracle.com/technetwork/developer-tools/index.html
Assumption
Oracle EE 11gR2 is already installed and configured. Mine was already installed on an AIX 6.3 box.
1.By default, Oracle Apex comes bundled with Oracle EE and is located in $ORACLE_HOME/apex.
Since I needed to install 5.1, I renamed the existing “apex” folder and kept the downloaded zip file (apex_5.1.4_en.zip) in a temp location on the server (/tmp/apex_5.1.4_en.zip).
2.Unzip the file to $ORACLE_HOME
$ cd $ORACLE_HOME
$ unzip /tmp/apex_5.1.4_en.zip — this will create a new apex folder containing the 5.1.4 software
3.Create a tablespace in the database for storing the Apex metadata
Login to the database as sysdba and issue the command below to create a tablespace called “APEX”
CREATE TABLESPACE APEX DATAFILE '/d03/oracle/oradata/apex_01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 1000M LOGGING
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
4. Now we need to install Apex 5.1. Change to the $ORACLE_HOME/apex directory and log in to the database as sysdba
SQL> !pwd
/d01/oracle/product/11.2.0.4/apex
SQL>@apexins.sql APEX APEX TEMP /i/
Here APEX is the tablespace created in Step 3 and TEMP is the temporary tablespace.
Thank you for installing Oracle Application Express 5.1.4.00.08
Oracle Application Express is installed in the APEX_050100 schema.
The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin (Oracle REST Data Services)
The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex (Oracle REST Data Services)
timing for: Phase 3 (Switch)
Elapsed: 00:01:54.23
timing for: Complete Installation
Elapsed: 00:13:58.41
PL/SQL procedure successfully completed.
5. Connect to the database again as “sysdba”, move to the same $ORACLE_HOME/apex location,
and run apex_rest_config.sql
SQL> !pwd
/d01/oracle/product/11.2.0.4/apex
SQL>@apex_rest_config.sql –> this will ask for passwords for APEX_LISTENER & APEX_REST_PUBLIC_USER
Enter a password for the APEX_LISTENER user [] apex4demo
Enter a password for the APEX_REST_PUBLIC_USER user [] apex4demo
I kept the same password “apex4demo” for the entire installation.
After a successful run of the above script, the output ends with:
==
Synonym created.
Session altered.
PL/SQL procedure successfully completed
==
6.Now we need to disable the XML DB HTTP port, as the plan is to launch Apex using Apache Tomcat and not standalone.
SQL> EXEC dbms_xdb.sethttpport(0);
PL/SQL procedure successfully completed.
7.Login to the DB again as sysdba to unlock the user accounts.
SQL> alter user apex_public_user identified by apex4demo account unlock;
User altered.
SQL> alter user APEX_REST_PUBLIC_USER identified by apex4demo account unlock;
User altered.
8.Installation of Tomcat & Java JDK
This installation was done on another Linux server (RHEL 6.7) from where Apex will be launched.
Confirmed – both servers (the Apex-hosting DB server and the Tomcat server) are able to talk to each other.
8.1 create a directory – /d01/apex
Unzip the Apache Tomcat and Java JDK archives
$ cd /d01/apex
$ tar xzf /tmp/apache-tomcat-9.0.6.tar.gz
$ tar xzf /tmp/jdk-9.0.1_linux-x64_bin.tar.gz
Modify the Tomcat config file (/d01/apex/apache-tomcat-9.0.6/conf/server.xml) so as to use different ports (the default is 8080).
This is needed in my case as there is already another application running on tomcat on the default port.
Server port –> Changed to 8105 (Default is 8005)
Connector port –> Changed to 8181 (default is 8080)
Connector port (Define an AJP 1.3 Connector on port 8009) –> Changed to 8109 (Default is 8009)
Save the config file
Start the tomcat service
Startup/shutdown scripts are located in /d01/apex/apache-tomcat-9.0.6/bin
a)startup.sh &
b)shutdown.sh
Before the services are stopped/started, make sure to set the following:
export JAVA_HOME=/d01/apex/jdk-9.0.1
export CATALINA_HOME=/d01/apex/apache-tomcat-9.0.6
export CATALINA_BASE=$CATALINA_HOME
$CATALINA_HOME/bin/startup.sh
Check the catalina log ($CATALINA_HOME/logs/catalina.out) for any issue.
If Tomcat starts successfully, launch the browser and enter the URL
http://<server_name>:<port_number> (please remember it is 8181 in my case, else the default is 8080)
9. Install Oracle REST Data Services (ORDS)
9.1 Create a directory /d01/ords
9.2 Unzip the ords.17.4.1.353.06.48.zip
$ cd /d01/ords
$ unzip /tmp/ords.17.4.1.353.06.48.zip
Once unzipped, the “/d01/ords/” directory will have a ‘war’ file called “ords.war”.
By default, the Apex launch URL will look like http://<server_name>:8181/ords, but if you would like some meaningful name in the URL, you MUST rename this ‘war’ file accordingly.
In my case, I wanted my URL to look like http://<server_name>:8181/demo, so I
renamed the ‘war’ file to “demo.war”.
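The rename itself is just a move inside /d01/ords:

$ cd /d01/ords
$ mv ords.war demo.war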
9.3 Rename the parameter file (within /d01/ords/params) and update the settings as per your actual environment.
Default name – ords_params.properties
Since the war file was renamed, it is mandatory to rename the parameter file as well – demo_params.properties
Contents of parameter file
db.hostname=<Server name where the Database is installed>
db.port=<DB Port #>
db.servicename=<SID or Service Name of the database>
db.sid=<SID or Service Name of the database>
db.username=APEX_PUBLIC_USER
migrate.apex.rest=false
rest.services.apex.add=
rest.services.ords.add=true
schema.tablespace.default=<Tablespace created in Step 3>
schema.tablespace.temp=<Existing TEMP Tablespace in the DB>
standalone.http.port=<Port number where the tomcat Apache is running>
standalone.static.images=
user.tablespace.default=USERS
user.tablespace.temp=TEMP
9.4 Create a folder called “config” inside “/d01/ords”
9.4.i> Set the configuration directory to “config” for demo.war
$ export JAVA_HOME=/d01/apex/jdk-9.0.1
$ $JAVA_HOME/bin/java -jar demo.war configdir /d01/ords/config
The above command resulted in the error below:
java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
at java.base/java.lang.Class.getDeclaredFields0(Native Method)
at java.base/java.lang.Class.privateGetDeclaredFields(Class.java:3024)
This is because JDK 9 deprecated the java.xml.bind module and removed it from the default classpath. To overcome the error, the workaround is to
use --add-modules to add the module back to the classpath.
Hence the correct command is:
$ $JAVA_HOME/bin/java --add-modules java.xml.bind -jar demo.war configdir /d01/ords/config
Mar 26, 2018 4:03:26 PM
INFO: Set config.dir to /d01/ords/config in: /d01/ords/demo.war
9.5 Now execute the Oracle REST Data Services config script
$ cd /d01/ords
$ $JAVA_HOME/bin/java --add-modules java.xml.bind -jar demo.war
The following information is asked for:
Enter the name of the database server [localhost]: <Press Enter>
Enter the database listen port [1521]: <Press Enter if your port is 1521, if other then put the value and press Enter>
Enter 1 to specify the database service name, or 2 to specify the database SID [1]: <Press Enter>
Enter the database service name: Demo   (I have put DEMO, my service name; please put your DB service name)
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]: <Press Enter>
Enter the database password for ORDS_PUBLIC_USER: apex4demo
Enter the database password for sys: sys123
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]: <Press Enter>
Enter the PL/SQL Gateway database user name [APEX_PUBLIC_USER]: <Press Enter>
Enter the database password for APEX_PUBLIC_USER: apex4demo
Enter 1 to specify passwords for Application Express RESTful Services database users (APEX_LISTENER, APEX_REST_PUBLIC_USER) or 2 to skip this step [1]: <Press Enter>
Enter the database password for APEX_LISTENER: apex4demo
Enter the database password for APEX_REST_PUBLIC_USER: apex4demo
Enter 1 if you wish to start in standalone mode or 2 to exit [1]: 2
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.
Enter the database password for SYS AS SYSDBA:
Confirm password:
Retrieving information.
Mar 26, 2018 5:45:08 PM
INFO: Updated configurations: defaults, apex, apex_pu, apex_al, apex_rt
Installing Oracle REST Data Services version 17.4.1.353.06.48
... Log file written to /d01/ords/logs/ords_install_core_2018-03-26_174508_00856.log
... Verified database prerequisites
... Created Oracle REST Data Services schema
... Created Oracle REST Data Services proxy user
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /d01/ords/logs/ords_install_datamodel_2018-03-26_174523_00352.log
... Log file written to /d01/ords/logs/ords_install_apex_2018-03-26_174524_00312.log
Completed installation for Oracle REST Data Services version 17.4.1.353.06.48. Elapsed time: 00:00:16.538
10. Deploying Oracle Apex to Apache Tomcat
Copy the demo.war file from the directory /d01/ords and paste into the directory /d01/apex/apache-tomcat-9.0.6/webapps
11. Create a folder “i” inside /d01/apex/apache-tomcat-9.0.6/webapps
11.1 Copy the entire contents from $ORACLE_HOME/apex/images to this folder /d01/apex/apache-tomcat-9.0.6/webapps/i
Log in to the DB server where Apex is installed
$ cd $ORACLE_HOME/apex/images
$ scp -r * <user_name>@<Tomcat_Server>:/d01/apex/apache-tomcat-9.0.6/webapps/i
12. The final step is to restart the Apache Tomcat service.
But before doing that, because ‘JDK 9 has deprecated the java.xml.bind module and removed it from the default classpath’, I had to modify the startup script to include this module.
Modify $CATALINA_HOME/bin/catalina.sh
# Add the JAVA 9 specific start-up parameters required by Tomcat
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-modules java.xml.bind" ### Added to overcome the deployment error
Now restart the tomcat
$CATALINA_HOME/bin/startup.sh
13. Launch the Apex URL – http://<server_name>:8181/demo
Hope this helps. Happy reading and learning!
-Anand M