Accessing Other Tencent Cloud Resources via Private Network
Overview
If you want your cloud function to securely access other resources in your Tencent Cloud account, such as MySQL, Redis, Kafka, or services deployed on CVM instances, you can use the "Private Network Connectivity" feature of cloud functions. After this feature is enabled, your cloud function can access resources within the VPC via their private IP addresses.
Advantages of Private Network Connectivity
- High Security: Data transmission does not traverse the public network, reducing security risks.
- Excellent Performance: Low latency, high bandwidth, and fast access speed on the private network.
- Cost Savings: Avoid public network traffic fees.
- Stable and Reliable: The private network environment is more stable, reducing the impacts of network fluctuations.
📄️ Configure Private Network Interconnection
Learn how to configure and enable the private network connectivity feature in cloud functions
📄️ Access Database Service
Detailed example of connecting to various TencentDB services via a private network
📄️ Access Other Services
Connecting to CVM, container service, and other Tencent Cloud resources
📄️ Best Practices
Security configuration, performance optimization, and fault troubleshooting for private network interconnection
Configuring Private Network Interconnection
Prerequisites
- Your Tencent Cloud account already has a VPC network.
- Target resources (such as database instances) have been deployed in the VPC.
- The cloud function and target resources are in the same region.
Configuration Steps
1. Enable private network connectivity
Configure private network connectivity in the cloud function console (a programmatic sketch follows these steps):
- Log in to the cloud function console
- Select the corresponding function and go to the Function Configuration page
- In the Network Configuration section, click Edit
- Enable the private network connectivity switch
- Select the target VPC and subnet
- Save the configuration
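If you prefer to script this step, the VPC binding can also be applied through the SCF UpdateFunctionConfiguration API. The following is a minimal sketch using the tencentcloud-sdk-nodejs package; the function name, region, and VPC/subnet IDs are placeholders that you must replace with your own values.
// Minimal sketch: bind an existing function to a VPC and subnet via the SCF API
// (function name, region, and IDs below are placeholders)
const tencentcloud = require('tencentcloud-sdk-nodejs');
const ScfClient = tencentcloud.scf.v20180416.Client;

const scfClient = new ScfClient({
  credential: {
    secretId: process.env.TENCENTCLOUD_SECRET_ID,
    secretKey: process.env.TENCENTCLOUD_SECRET_KEY
  },
  region: 'ap-guangzhou' // must be the region of both the function and the VPC
});

async function bindFunctionToVpc() {
  await scfClient.UpdateFunctionConfiguration({
    FunctionName: 'my-function',    // placeholder function name
    VpcConfig: {
      VpcId: 'vpc-xxxxxxxx',        // placeholder VPC ID
      SubnetId: 'subnet-xxxxxxxx'   // placeholder subnet ID
    }
  });
}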
2. Configure the security group
Ensure that the security group rules on the target resources allow inbound access from the cloud function's subnet (a programmatic sketch follows the examples below):
# Example: Allow cloud function to access MySQL (port 3306)
Inbound rules:
- Protocol: TCP
- Port: 3306
- Source: CIDR of the subnet where the cloud function resides (e.g., 10.0.1.0/24)
# Example: Allow cloud function to access Redis (port 6379)
Inbound rules:
- Protocol: TCP
- Port: 6379
- Source: CIDR of the subnet where the cloud function resides
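The same rules can be created programmatically with the VPC API's CreateSecurityGroupPolicies action. The sketch below uses tencentcloud-sdk-nodejs; the security group ID and CIDR block are placeholders, and the credential setup is assumed to be the same as in the earlier sketch.
// Minimal sketch: add an inbound rule allowing the function subnet to reach MySQL on port 3306
const tencentcloud = require('tencentcloud-sdk-nodejs');
const VpcClient = tencentcloud.vpc.v20170312.Client;

const vpcClient = new VpcClient({
  credential: {
    secretId: process.env.TENCENTCLOUD_SECRET_ID,
    secretKey: process.env.TENCENTCLOUD_SECRET_KEY
  },
  region: 'ap-guangzhou'
});

async function allowMysqlFromFunctionSubnet() {
  await vpcClient.CreateSecurityGroupPolicies({
    SecurityGroupId: 'sg-xxxxxxxx', // placeholder: security group bound to the MySQL instance
    SecurityGroupPolicySet: {
      Ingress: [{
        Protocol: 'TCP',
        Port: '3306',
        CidrBlock: '10.0.1.0/24',   // CIDR of the subnet where the cloud function resides
        Action: 'ACCEPT',
        PolicyDescription: 'Allow SCF subnet to access MySQL'
      }]
    }
  });
}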
3. Obtain the private network address
Obtain the private network access address of the resource in the corresponding cloud service console (a usage sketch follows the table):
| Service Type | Console Location | Private Network Address Example |
|---|---|---|
| MySQL | Database MySQL > Instance Details | 10.0.1.100:3306 |
| Redis | Database Redis > Instance Details | 10.0.1.101:6379 |
| Kafka | Message Queue CKafka > Instance Details | 10.0.1.102:9092 |
| CVM | Cloud Virtual Machine CVM > Instance Details | 10.0.1.103:80 |
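Rather than hardcoding these addresses, it is advisable to store them as function environment variables and read them at runtime, as the Best Practices section below does. A minimal sketch, assuming variables named MYSQL_HOST and MYSQL_PORT have been configured on the function:
// Read the private network address from environment variables (variable names are assumptions)
const dbHost = process.env.MYSQL_HOST || '10.0.1.100'; // private IP obtained from the console
const dbPort = parseInt(process.env.MYSQL_PORT, 10) || 3306;
console.log(`Connecting to MySQL at ${dbHost}:${dbPort} over the VPC`);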
Accessing Database Service
MySQL Database
- Node.js
- Python
const mysql = require('mysql2/promise');
// Use the private network address to connect to MySQL
const pool = mysql.createPool({
host: '10.0.1.100', // MySQL private network address
port: 3306,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
  connectionLimit: 10,
  waitForConnections: true,
  connectTimeout: 60000 // mysql2 uses connectTimeout; the acquireTimeout/timeout options are not supported
});
exports.main = async (event, context) => {
try {
const connection = await pool.getConnection();
try {
// Execute the query
const [rows] = await connection.query('SELECT * FROM users LIMIT 10');
return {
statusCode: 200,
body: {
success: true,
data: rows,
message: 'Query successful'
}
};
} finally {
connection.release();
}
} catch (error) {
console.error('MySQL connection failed:', error);
return {
statusCode: 500,
body: {
success: false,
error: error.message
}
};
}
};
import pymysql
import json
import os
def main_handler(event, context):
try:
# Use the private network address to connect to MySQL
connection = pymysql.connect(
host='10.0.1.100', # MySQL private network address
port=3306,
user=os.environ['DB_USER'],
password=os.environ['DB_PASSWORD'],
database=os.environ['DB_NAME'],
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor
)
with connection:
with connection.cursor() as cursor:
# Execute the query
cursor.execute("SELECT * FROM users LIMIT 10")
result = cursor.fetchall()
return {
'statusCode': 200,
'body': json.dumps({
'success': True,
'data': result,
'message': 'Query successful'
}, ensure_ascii=False)
}
except Exception as e:
print(f'MySQL connection failed: {str(e)}')
return {
'statusCode': 500,
'body': json.dumps({
'success': False,
'error': str(e)
}, ensure_ascii=False)
}
Redis Cache
- Node.js
- Python
const redis = require('redis');
// Create the Redis client (node-redis v4 syntax, using the private network address)
const client = redis.createClient({
  socket: {
    host: '10.0.1.101', // Redis private network address
    port: 6379,
    reconnectStrategy: (retries) => {
      // Give up after 10 attempts; otherwise back off up to 3 seconds
      if (retries > 10) {
        return new Error('Retry attempts exhausted');
      }
      return Math.min(retries * 100, 3000);
    }
  },
  password: process.env.REDIS_PASSWORD,
  database: 0
});
exports.main = async (event, context) => {
try {
// Connect to Redis
await client.connect();
const { action, key, value } = event;
let result;
switch (action) {
case 'get':
result = await client.get(key);
break;
case 'set':
        await client.set(key, value, { EX: 3600 }); // Set to expire in 1 hour (node-redis v4 option syntax)
result = 'OK';
break;
case 'del':
result = await client.del(key);
break;
case 'exists':
result = await client.exists(key);
break;
default:
throw new Error('Unsupported operation');
}
return {
statusCode: 200,
body: {
success: true,
data: result,
message: 'Operation successful'
}
};
} catch (error) {
console.error('Redis operation failed:', error);
return {
statusCode: 500,
body: {
success: false,
error: error.message
}
};
} finally {
await client.quit();
}
};
import redis
import json
import os
def main_handler(event, context):
try:
# Connect to Redis using private network address
r = redis.Redis(
host='10.0.1.101', # Redis private network address
port=6379,
password=os.environ.get('REDIS_PASSWORD'),
db=0,
decode_responses=True,
socket_timeout=5,
socket_connect_timeout=5
)
action = event.get('action')
key = event.get('key')
value = event.get('value')
if action == 'get':
result = r.get(key)
elif action == 'set':
r.setex(key, 3600, value) # Set to expire in 1 hour
result = 'OK'
elif action == 'del':
result = r.delete(key)
elif action == 'exists':
result = r.exists(key)
else:
raise ValueError('Unsupported operation')
return {
'statusCode': 200,
'body': json.dumps({
'success': True,
'data': result,
'message': 'Operation successful'
}, ensure_ascii=False)
}
except Exception as e:
print(f'Redis operation failed: {str(e)}')
return {
'statusCode': 500,
'body': json.dumps({
'success': False,
'error': str(e)
}, ensure_ascii=False)
}
Kafka Message Queue
- Node.js
- Python
const { Kafka } = require('kafkajs');
// Create the Kafka client (using the private network address)
const kafka = new Kafka({
clientId: 'scf-kafka-client',
brokers: ['10.0.1.102:9092'], // Kafka private network address
sasl: {
mechanism: 'plain',
username: process.env.KAFKA_USERNAME,
password: process.env.KAFKA_PASSWORD
}
});
exports.main = async (event, context) => {
const { action, topic, message, groupId } = event;
try {
if (action === 'produce') {
// Produce message
const producer = kafka.producer();
await producer.connect();
await producer.send({
topic: topic,
messages: [{
key: Date.now().toString(),
value: JSON.stringify(message),
timestamp: Date.now()
}]
});
await producer.disconnect();
return {
statusCode: 200,
body: {
success: true,
message: 'Message sent successfully'
}
};
} else if (action === 'consume') {
      // Consume messages
const consumer = kafka.consumer({ groupId: groupId || 'scf-group' });
await consumer.connect();
await consumer.subscribe({ topic: topic });
const messages = [];
await consumer.run({
eachMessage: async ({ topic, partition, message }) => {
messages.push({
topic,
partition,
offset: message.offset,
key: message.key?.toString(),
value: message.value?.toString(),
timestamp: message.timestamp
});
// Limit the message count to avoid timeout
          if (messages.length >= 10) {
            consumer.pause([{ topic }]); // pause() is safe to call from eachMessage
          }
}
});
// Wait for a period to collect messages
await new Promise(resolve => setTimeout(resolve, 5000));
await consumer.disconnect();
return {
statusCode: 200,
body: {
success: true,
data: messages,
message: 'Message consumed successfully'
}
};
}
} catch (error) {
console.error('Kafka operation failed:', error);
return {
statusCode: 500,
body: {
success: false,
error: error.message
}
};
}
};
from kafka import KafkaProducer, KafkaConsumer
import json
import os
from datetime import datetime
def main_handler(event, context):
action = event.get('action')
topic = event.get('topic')
try:
if action == 'produce':
            # Produce messages
producer = KafkaProducer(
bootstrap_servers=['10.0.1.102:9092'], # Kafka private network address
security_protocol='SASL_PLAINTEXT',
sasl_mechanism='PLAIN',
sasl_plain_username=os.environ['KAFKA_USERNAME'],
sasl_plain_password=os.environ['KAFKA_PASSWORD'],
value_serializer=lambda v: json.dumps(v).encode('utf-8')
)
message = event.get('message', {})
message['timestamp'] = datetime.now().isoformat()
future = producer.send(topic, message)
producer.flush()
producer.close()
return {
'statusCode': 200,
'body': json.dumps({
'success': True,
'message': 'Message sent successfully'
}, ensure_ascii=False)
}
elif action == 'consume':
# Consume messages.
consumer = KafkaConsumer(
topic,
bootstrap_servers=['10.0.1.102:9092'],
security_protocol='SASL_PLAINTEXT',
sasl_mechanism='PLAIN',
sasl_plain_username=os.environ['KAFKA_USERNAME'],
sasl_plain_password=os.environ['KAFKA_PASSWORD'],
group_id=event.get('groupId', 'scf-group'),
value_deserializer=lambda m: json.loads(m.decode('utf-8')),
consumer_timeout_ms=5000 # 5-second timeout
)
messages = []
for message in consumer:
messages.append({
'topic': message.topic,
'partition': message.partition,
'offset': message.offset,
'key': message.key.decode('utf-8') if message.key else None,
'value': message.value,
'timestamp': message.timestamp
})
# Limit message count
if len(messages) >= 10:
break
consumer.close()
return {
'statusCode': 200,
'body': json.dumps({
'success': True,
'data': messages,
'message': 'Message consumed successfully'
}, ensure_ascii=False)
}
except Exception as e:
print(f'Kafka operation failed: {str(e)}')
return {
'statusCode': 500,
'body': json.dumps({
'success': False,
'error': str(e)
}, ensure_ascii=False)
}
Accessing Other Services
CVM Server
const axios = require('axios');
exports.main = async (event, context) => {
try {
// Access the HTTP service on CVM (using a private network address)
const response = await axios({
method: 'GET',
url: 'http://10.0.1.103:8080/api/data', // CVM private network address
timeout: 10000,
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.API_TOKEN}`
}
});
return {
statusCode: 200,
body: {
success: true,
data: response.data,
message: 'CVM service call successful'
}
};
} catch (error) {
console.error('CVM service call failed:', error);
return {
statusCode: 500,
body: {
success: false,
error: error.message
}
};
}
};
TKE Container Service
const axios = require('axios');
exports.main = async (event, context) => {
try {
    // Access a service in the TKE cluster (using a private network address)
const response = await axios({
method: 'POST',
url: 'http://10.0.1.104:80/api/process', // TKE service private network address
data: event.payload,
timeout: 30000,
headers: {
'Content-Type': 'application/json'
}
});
return {
statusCode: 200,
body: {
success: true,
data: response.data,
message: 'TKE service call successful'
}
};
} catch (error) {
console.error('TKE service call failed:', error);
return {
statusCode: 500,
body: {
success: false,
error: error.message
}
};
}
};
Best Practices
Connection Pool Management
const mysql = require('mysql2/promise');
const redis = require('redis');

// Global connection pool and client to avoid creating a new connection on every invocation
let mysqlPool;
let redisClient;
function getMysqlPool() {
if (!mysqlPool) {
mysqlPool = mysql.createPool({
host: process.env.MYSQL_HOST,
port: 3306,
user: process.env.MYSQL_USER,
password: process.env.MYSQL_PASSWORD,
database: process.env.MYSQL_DATABASE,
      connectionLimit: 5, // A small connection limit is recommended in the cloud function environment
      waitForConnections: true,
      connectTimeout: 60000, // mysql2 does not support the acquireTimeout/timeout/reconnect options
      enableKeepAlive: true
});
}
return mysqlPool;
}
function getRedisClient() {
  if (!redisClient) {
    // node-redis v4 syntax; remember to call connect() once before issuing commands
    redisClient = redis.createClient({
      socket: {
        host: process.env.REDIS_HOST,
        port: 6379,
        reconnectStrategy: (retries) => {
          if (retries > 3) return new Error('Retry attempts exhausted');
          return Math.min(retries * 100, 3000);
        }
      },
      password: process.env.REDIS_PASSWORD
    });
  }
  return redisClient;
}
Error Handling and Retry
// Database operations with a retry mechanism
async function executeWithRetry(operation, maxRetries = 3) {
let lastError;
for (let i = 0; i < maxRetries; i++) {
try {
return await operation();
} catch (error) {
lastError = error;
// Determine whether the error is retryable
if (isRetryableError(error) && i < maxRetries - 1) {
const delay = Math.pow(2, i) * 1000; // Exponential backoff
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
throw lastError;
}
function isRetryableError(error) {
const retryableCodes = [
'ECONNRESET',
'ETIMEDOUT',
'ENOTFOUND',
'ECONNREFUSED'
];
return retryableCodes.includes(error.code) ||
error.message.includes('Connection lost');
}
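As a usage example, the retry wrapper can be combined with the pooled connection defined above. This is a minimal sketch; the table name and helper name are placeholders:
// Usage sketch: run a query with automatic retries on transient network errors
async function getUserById(id) {
  return executeWithRetry(async () => {
    const pool = getMysqlPool();
    const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]); // placeholder table
    return rows;
  });
}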
Security Configuration
// Environment Variable Configuration Sample
const config = {
mysql: {
host: process.env.MYSQL_HOST,
port: parseInt(process.env.MYSQL_PORT) || 3306,
user: process.env.MYSQL_USER,
password: process.env.MYSQL_PASSWORD,
database: process.env.MYSQL_DATABASE
},
redis: {
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT) || 6379,
password: process.env.REDIS_PASSWORD
},
kafka: {
brokers: process.env.KAFKA_BROKERS?.split(',') || [],
username: process.env.KAFKA_USERNAME,
password: process.env.KAFKA_PASSWORD
}
};
// Configuration Validation
function validateConfig() {
const required = [
'MYSQL_HOST', 'MYSQL_USER', 'MYSQL_PASSWORD', 'MYSQL_DATABASE',
'REDIS_HOST', 'REDIS_PASSWORD',
'KAFKA_BROKERS', 'KAFKA_USERNAME', 'KAFKA_PASSWORD'
];
const missing = required.filter(key => !process.env[key]);
if (missing.length > 0) {
throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
}
}
Performance Monitoring
// Performance monitoring decorator
function withMonitoring(operation, operationName) {
return async function(...args) {
const startTime = Date.now();
const requestId = Math.random().toString(36).substr(2, 9);
console.log(`[${requestId}] ${operationName} started`);
try {
const result = await operation.apply(this, args);
const duration = Date.now() - startTime;
console.log(`[${requestId}] ${operationName} completed: ${duration}ms`);
// Log slow operations
if (duration > 5000) {
console.warn(`[${requestId}] Slow operation detected: ${operationName} ${duration}ms`);
}
return result;
} catch (error) {
const duration = Date.now() - startTime;
console.error(`[${requestId}] ${operationName} failed: ${duration}ms`, error.message);
throw error;
}
};
}
// Usage example
const monitoredMysqlQuery = withMonitoring(
async (sql, params) => {
const pool = getMysqlPool();
const connection = await pool.getConnection();
try {
return await connection.query(sql, params);
} finally {
connection.release();
}
},
'MySQL Query'
);
Troubleshooting
Frequently Asked Questions
Cannot connect to private network resources
Possible causes:
- Private network interconnection not properly configured
- Security group rules are blocking access
- The target service is not started or the address is incorrect.
Troubleshooting steps:
- Check whether the VPC configuration of the cloud function is correct
- Verify whether security group rules allow the corresponding ports
- Confirm the private network address and port of the target resource
- Run a connectivity test from within the cloud function (for example, a TCP connection check, as in the code below)
// Connectivity test code
const net = require('net');
function testConnection(host, port, timeout = 5000) {
return new Promise((resolve, reject) => {
const socket = new net.Socket();
socket.setTimeout(timeout);
socket.on('connect', () => {
socket.destroy();
resolve(true);
});
socket.on('timeout', () => {
socket.destroy();
reject(new Error('Connection timeout'));
});
socket.on('error', (error) => {
reject(error);
});
socket.connect(port, host);
});
}
Frequent connection disconnections
Solution:
- Configure connection pool and reconnection mechanism
- Set an appropriate timeout period
- Implement health checks
// Health check sample
async function healthCheck() {
try {
await testConnection(process.env.MYSQL_HOST, 3306);
await testConnection(process.env.REDIS_HOST, 6379);
return { status: 'healthy' };
} catch (error) {
return { status: 'unhealthy', error: error.message };
}
}
Monitoring and Logging
// Unified logging (the request ID is passed in from the handler, since `context` is not in scope here)
function logger(level, message, extra = {}) {
  const logEntry = {
    timestamp: new Date().toISOString(),
    level,
    message,
    ...extra
  };
  console.log(JSON.stringify(logEntry));
}
// Usage example
exports.main = async (event, context) => {
  logger('INFO', 'Function execution started', { requestId: context.requestId, event });
  try {
    // Business logic
    const result = await processRequest(event);
    logger('INFO', 'Function executed successfully', { requestId: context.requestId, result });
    return result;
  } catch (error) {
    logger('ERROR', 'Function execution failed', { requestId: context.requestId, error: error.message, stack: error.stack });
    throw error;
  }
};
Related Documentation
📄️ Private Network Interconnection Configuration
Detailed private network interconnection feature configuration guide
📄️ MySQL Database
Detailed usage of MySQL Database
- The private network interconnection feature only supports access to resources in the same region.
- It is recommended to use connection pools to improve performance and resource utilization.
- Critical operations must implement retry mechanisms and error handling.
- Regularly monitor connection status and performance metrics.
- Ensure security group rules are correctly configured to avoid security risks.
- Private network addresses may change; it is recommended to use a domain name or a configuration center.
- Be aware of the execution time limit of cloud functions and avoid long-lived connections.
- Configure appropriate timeouts and retry policies in production environments.