Configuring Oracle Database in Kubernetes


This document walks you through configuring an Oracle database in a Kubernetes environment. It covers the deployment steps in Kubernetes and the corresponding configuration for Oracle.

Preparation Steps

Start by visiting the Oracle Container Registry (container-registry.oracle.com) and logging in.
Click on 'Database'.


Select 'enterprise'.


On the right side, click 'Agree' to accept the license terms.


Install Docker on the server following the steps outlined in Creating Debian OS and Installing Docker on GCP Platform. After installation, log in to container-registry.oracle.com.

# If permissions are insufficient, adjust them first: sudo chmod 777 /var/run/docker.sock
# (this is permissive; adding your user to the 'docker' group is a safer alternative)
docker login container-registry.oracle.com
# Create a token secret based on config.json. The following command includes a namespace. If no namespace is specified, it will be created in the default namespace.
kubectl create secret generic oracle --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=database
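
As a quick sanity check (assuming the 'database' namespace already exists), you can confirm the pull secret was created with the expected type:

```shell
# List the secret; it should appear with type kubernetes.io/dockerconfigjson
kubectl get secret oracle -n database

# Print just the secret type
kubectl get secret oracle -n database -o jsonpath='{.type}'
```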

Creating Relevant Configuration Files

Before creating relevant configuration files, it is recommended to create a folder to centrally store these files. Additionally, create a namespace to store these resources to avoid conflicts with other resources.

In this example, I am creating a folder named 'database' and a namespace named 'database,' and I will create the relevant configuration files in this folder.

# Create the folders (-p creates missing parent directories)
# <path> is the path to the database storage
# Example: /mnt/disks/data/database
mkdir -p database <path>/oracle/oradata
# Change the folder permissions.
# <path> is the path to the database storage
# Example: /mnt/disks/data/database
sudo chmod 777 <path>/oracle <path>/oracle/oradata
# Create a namespace
kubectl create namespace database

You can also label nodes and specify nodeSelector in the deployment to deploy the application only on specified nodes.

In this example, I am labeling a node with the 'database' label and specifying nodeSelector in the deployment.

# Label the node
# <node-name> is the node server name.
kubectl label node <node-name> worker=database
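
To confirm the label landed on the intended node, you can list nodes with the label value shown as a column:

```shell
# The WORKER column should show 'database' for the labeled node
kubectl get nodes -L worker
```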

Create the storage.yaml file with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: database
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

Create the volume.yaml file with the following content:

# <path> is the path to the database storage
# Example: path: "/mnt/disks/data/database"
apiVersion: v1
kind: PersistentVolume
metadata:
  name: database
  labels:
    app: database
spec:
  storageClassName: database
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: <path>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database
  namespace: database
  labels:
    app: database
spec:
  storageClassName: database
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi

Create the config.yaml file with the following content:

# <password> Set your password.
# Note: a ConfigMap stores this value in plain text; for production, consider a Secret instead.
apiVersion: v1
kind: ConfigMap
metadata:
  name: oracle
  namespace: database
  labels:
    app: oracle
data:
  ORACLE_PWD: <password>

Create the deployment.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracle
  namespace: database
  labels:
    app: oracle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracle
  template:
    metadata:
      labels:
        app: oracle
    spec:
      nodeSelector:
        worker: database
      # Use the registry secret created earlier to pull from container-registry.oracle.com
      imagePullSecrets:
        - name: oracle
      containers:
        - name: oracle
          image: container-registry.oracle.com/database/enterprise:latest
          ports:
            - name: listener
              containerPort: 1521
          envFrom:
            - configMapRef:
                name: oracle
          volumeMounts:
            - mountPath: /opt/oracle/oradata
              name: oracle
              subPath: oracle/oradata
            - mountPath: /opt/oracle/scripts/setup
              name: oracle
              subPath: oracle/scripts/setup
            - mountPath: /opt/oracle/scripts/startup
              name: oracle
              subPath: oracle/scripts/startup
      volumes:
        - name: oracle
          persistentVolumeClaim:
            claimName: database

Create the service.yaml file with the following content:

# For external connectivity, NodePort is used here. If internal connectivity is sufficient, ClusterIP can be used.
# Note: the default NodePort range is 30000-32767; using 1521 as the nodePort requires the
# API server's --service-node-port-range to include it.
apiVersion: v1
kind: Service
metadata:
  name: oracle
  namespace: database
  labels:
    app: oracle
spec:
  type: NodePort
  ports:
    - name: oracle
      port: 1521
      nodePort: 1521
      targetPort: 1521
  selector:
    app: oracle
# Deploy storage
kubectl apply -f storage.yaml
# Deploy volume
kubectl apply -f volume.yaml
# Deploy config
kubectl apply -f config.yaml
# Deploy deployment
kubectl apply -f deployment.yaml
# Deploy service
kubectl apply -f service.yaml
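
First-time initialization of the Oracle image can take several minutes. One way to watch progress (Oracle's container images log a ready banner once initialization finishes):

```shell
# Wait for the pod to reach Running
kubectl get pods -n database -w

# Follow the logs; Oracle's images print 'DATABASE IS READY TO USE!'
# when first-time initialization has completed
kubectl logs -f deployment/oracle -n database
```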

Database Configuration

# Create listener.ora with the content below, then save and close it.
# Pay attention to formatting and spacing; incorrect formatting may prevent the listener from starting.
sudo vim listener.ora
# listener.ora
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ORCLCDB)
      (ORACLE_HOME = /opt/oracle/product/21c/dbhome_1)
      (GLOBAL_DBNAME = ORCLPDB1)
    )
  )

DEDICATED_THROUGH_BROKER_LISTENER = ON
DIAG_ADR_ENABLED = off
# Copy listener.ora to the pod.
# <pod-name> is the name of the Oracle pod.
# <oracle-home> is the $ORACLE_HOME path inside the pod, for example: /opt/oracle/homes/OraDB21Home1.
kubectl cp listener.ora <pod-name>:<oracle-home>/network/admin/listener.ora -n database
# Enter the oracle pod
# <pod-name> is the oracle pod name.
kubectl exec --stdin --tty <pod-name> -n database -- /bin/bash
# Disable the listener.
lsnrctl stop
# Enable the listener.
lsnrctl start
# Log in as sysdba.
sqlplus / as sysdba
# Switch to the PDB container.
alter session set container = ORCLPDB1;
# Create a user.
# <user> is the username.
# <password> is the password.
create user <user> identified by "<password>";
# Grant roles to the user (connect: ability to log in, resource: create schema objects, dba: administrator).
# <user> is the username.
grant connect, resource, dba to <user>;
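
To verify the new account, you can connect to the pluggable database from inside the pod; the service name ORCLPDB1 matches the GLOBAL_DBNAME configured in listener.ora:

```shell
# Still inside the pod: log in to the PDB as the newly created user
sqlplus <user>/<password>@//localhost:1521/ORCLPDB1
```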

GCP Firewall Configuration

If you are using GCP and exposing the database externally, you need to add a firewall rule allowing inbound connections on port 1521.

After logging into the GCP platform, navigate to the left-hand menu:

Virtual Private Cloud > Firewall


Click the 'Create Firewall Rule' button at the top and add a rule to allow TCP port 1521 connections.


Test the Connection

Since NodePort is used in this example, you can connect directly from outside using the node's IP. You can test the connection using commonly used database tools such as DataGrip, Navicat, etc.

The login account and password are the ones you created for the user.
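
For JDBC-based tools such as DataGrip, a connection URL of the following shape should work, assuming the nodePort is 1521 as configured above; <node-ip> is the node's external IP:

```
jdbc:oracle:thin:@//<node-ip>:1521/ORCLPDB1
```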

Other

The default expiration for the user password is 180 days. You can use the following commands to change it to unlimited.

# Log in as sysdba.
sqlplus / as sysdba
# Check the expiration setting for the user password (usually the default configuration is DEFAULT).
SELECT * FROM dba_profiles WHERE profile='DEFAULT' AND resource_name='PASSWORD_LIFE_TIME';
# Change the password.
# <user> is the username.
# <password> is the new password.
alter user <user> identified by "<password>";
# Set the expiration to unlimited.
ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
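
To confirm the change took effect, the earlier check can be run again; the LIMIT column should now read UNLIMITED:

```sql
-- Re-check the profile; LIMIT should now be UNLIMITED
SELECT limit FROM dba_profiles
WHERE profile = 'DEFAULT' AND resource_name = 'PASSWORD_LIFE_TIME';
```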