
Backmerge from main #186

Merged 28 commits on Jan 21, 2024
Commits
c1776ea
[zarya] hotfix for subscriber roaming check
davidkneipp Jan 6, 2024
cf895f6
Fix for API showing incorrect model on ims_subscriber
davidkneipp Jan 6, 2024
31967aa
Always authorize sos apn via Gx
davidkneipp Jan 10, 2024
63fe3ff
Store emergency subscriber data
davidkneipp Jan 10, 2024
3072fe9
Emergency roaming sos
davidkneipp Jan 11, 2024
6c41b0b
Fix for STA / Emergency SOS
davidkneipp Jan 12, 2024
612655e
Change loading order of database tables
nickvsnetworking Jan 12, 2024
e37e323
Removed reference to emergency session ID
nickvsnetworking Jan 12, 2024
becbc85
Fix PLMN Decoding
nickvsnetworking Jan 12, 2024
a9ff02e
Encode PLMN update
nickvsnetworking Jan 12, 2024
9157c93
Log if no serving APN set
nickvsnetworking Jan 12, 2024
3d4f5d7
Added additional logging for re-auth testing
nickvsnetworking Jan 12, 2024
03245ad
Better logging and fix EncodePLMN
nickvsnetworking Jan 12, 2024
edeca3a
Fix for redis additional peers undefined
davidkneipp Jan 13, 2024
72a4b17
SDP parsing
nickvsnetworking Jan 14, 2024
eff246b
Add redis import to diameter
davidkneipp Jan 15, 2024
213f43f
Merge pull request #183 from nickvsnetworking/ReAuth_Testing
davidkneipp Jan 15, 2024
63ca0f0
Start cleanup for 1.0.1, namespace update
davidkneipp Jan 15, 2024
50eee27
Pull in RAR fixes
davidkneipp Jan 16, 2024
81afdf5
Move emergency subscriber from redis model to sqlalchemy
davidkneipp Jan 16, 2024
1e957b8
Emergency subscriber migration complete, pre testing
davidkneipp Jan 16, 2024
00e7d23
Remove stray curly brace
davidkneipp Jan 16, 2024
1fdcf42
Rely on IP instead of ID for emergency_subscriber geored
davidkneipp Jan 16, 2024
11573c6
Working geored for emergency_subscriber
davidkneipp Jan 16, 2024
221ee36
Update changelog
davidkneipp Jan 16, 2024
5b3a767
Fix for Dedicated Bearers on MO and MT call legs
davidkneipp Jan 18, 2024
03e8c17
Fix merge conflicts
davidkneipp Jan 21, 2024
be6cfcb
Merge pull request #185 from nickvsnetworking/pre_1_0_1
davidkneipp Jan 21, 2024
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -14,6 +14,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Gx RAR now dynamically creates TFT up to 512k based on UE request.
- SQN Resync now propagates via Geored when enabled
- Renamed sh_profile to xcap_profile in ims_subscriber
- Rebuilt keys using unique namespace for redis-sentinel / stateless compatibility.

### Fixed

@@ -30,6 +31,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- generateUpgade.sh for generating alembic upgrade scripts
- Control of outbound roaming S6a AIR and ULA responses through roaming_rule and roaming_network objects.
- Roaming management on a per-subscriber basis, through subscriber.roaming_enabled and subscriber.roaming_rule_list.
- Support for Gx and Rx auth of unknown subscribers attaching via SOS.
- Preliminary support for SCTP.

## [1.0.0] - 2023-09-27

31 changes: 24 additions & 7 deletions config.yaml
@@ -1,5 +1,7 @@
## HSS Parameters
hss:
# Transport Type. "TCP" and "SCTP" are valid options.
# Note: SCTP works but is still experimental. TCP has been load-tested and performs in a production environment.
transport: "TCP"
#IP Addresses to bind on (List) - For TCP only the first IP is used, for SCTP all used for Transport (Multihomed).
bind_ip: ["0.0.0.0"]
@@ -70,6 +72,12 @@ hss:
# Whether or not to allow a subscriber to connect to an undefined network when outbound roaming.
allow_undefined_networks: True

# SCTP Socket Parameters
sctp:
rtoMax: 5000
rtoMin: 500
rtoInitial: 1000
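These three RTO timers correspond to the knobs Linux exposes through the SCTP_RTOINFO socket option. The sketch below is not PyHSS code: the option number and struct layout are assumptions based on the kernel's `struct sctp_rtoinfo`, so verify both against your platform's `sctp.h` before use.

```python
import socket
import struct

# Linux socket option SCTP_RTOINFO (assumed value 0; check your sctp.h).
# Payload layout assumed from struct sctp_rtoinfo:
#   sctp_assoc_t srto_assoc_id; uint32 srto_initial, srto_max, srto_min (ms)
SCTP_RTOINFO = 0

def pack_rto_info(rto_initial: int, rto_max: int, rto_min: int, assoc_id: int = 0) -> bytes:
    """Pack the SCTP_RTOINFO payload; all timer values are in milliseconds."""
    return struct.pack("=IIII", assoc_id, rto_initial, rto_max, rto_min)

def apply_sctp_config(sock: socket.socket, sctp_cfg: dict) -> None:
    """Apply the sctp: block from config.yaml to an already-created SCTP socket."""
    payload = pack_rto_info(sctp_cfg["rtoInitial"], sctp_cfg["rtoMax"], sctp_cfg["rtoMin"])
    sock.setsockopt(socket.IPPROTO_SCTP, SCTP_RTOINFO, payload)

# Packing the values from the config block above:
payload = pack_rto_info(rto_initial=1000, rto_max=5000, rto_min=500)
```

The packed payload is what `setsockopt` would receive; the association id of 0 means "apply to future associations" on one-to-many sockets.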


api:
page_size: 200
@@ -95,8 +103,7 @@ logging:
diameter_logging_file: /var/log/pyhss_diameter.log
geored_logging_file: /var/log/pyhss_geored.log
metric_logging_file: /var/log/pyhss_metrics.log
log_to_terminal: True
sqlalchemy_sql_echo: True
sqlalchemy_sql_echo: False
sqlalchemy_pool_recycle: 15
sqlalchemy_pool_size: 30
sqlalchemy_max_overflow: 0
@@ -113,7 +120,7 @@ database:
webhooks:
enabled: False
endpoints:
- http://127.0.0.1:8181
- 'http://127.0.0.1:8181'

## Geographic Redundancy Parameters
geored:
@@ -123,13 +130,23 @@ geored:
- 'http://hss01.mnc001.mcc001.3gppnetwork.org:8080'
- 'http://hss02.mnc001.mcc001.3gppnetwork.org:8080'

#Redis is required to run PyHSS. A locally running instance is recommended for production.
#Redis is required to run PyHSS. An instance running on a local network is recommended for production.
redis:
# Whether to use a UNIX socket instead of a tcp connection to redis. Host and port is ignored if useUnixSocket is True.
useUnixSocket: False
# Which connection type to attempt. Valid options are: tcp, unix, sentinel
# tcp - Connection via a standard TCP socket to a given host and port.
# unix - Connect to redis via a unix socket, provided by unixSocketPath.
# sentinel - Connect to one or more redis sentinel hosts.
connectionType: "tcp"
unixSocketPath: '/var/run/redis/redis-server.sock'
host: localhost
port: 6379
sentinel:
masterName: exampleMaster
hosts:
- exampleSentinel.mnc001.mcc001.3gppnetwork.org:
port: 6379
password: ''
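The three connectionType values map naturally onto redis-py's `Redis()` and `Sentinel()` constructors. The helper below is an illustrative sketch, not PyHSS code: the key names come from the config block above, but the translation into constructor arguments is an assumption.

```python
def redis_connection_args(cfg: dict):
    """Translate the redis: config block into client-constructor arguments (sketch).

    Returns (kind, args): kind matches connectionType, args are the keyword
    arguments you would hand to redis-py's Redis() or Sentinel().
    """
    connection_type = cfg.get("connectionType", "tcp")
    if connection_type == "tcp":
        return "tcp", {"host": cfg.get("host", "localhost"),
                       "port": cfg.get("port", 6379)}
    if connection_type == "unix":
        return "unix", {"unix_socket_path": cfg["unixSocketPath"]}
    if connection_type == "sentinel":
        # Flatten the per-host mappings into (host, port) tuples.
        hosts = [(host, opts.get("port", 26379))
                 for entry in cfg["sentinel"]["hosts"]
                 for host, opts in entry.items()]
        return "sentinel", {"sentinels": hosts,
                            "service_name": cfg["sentinel"]["masterName"]}
    raise ValueError(f"Unknown connectionType: {connection_type}")

cfg = {
    "connectionType": "sentinel",
    "sentinel": {
        "masterName": "exampleMaster",
        "hosts": [{"exampleSentinel.mnc001.mcc001.3gppnetwork.org": {"port": 6379}}],
        "password": "",
    },
}
kind, args = redis_connection_args(cfg)
```

For the sentinel case you would then call `Sentinel(args["sentinels"]).master_for(args["service_name"])` to obtain a client bound to the current master.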


prometheus:
enabled: False
241 changes: 221 additions & 20 deletions lib/database.py

Large diffs are not rendered by default.

543 changes: 450 additions & 93 deletions lib/diameter.py

Large diffs are not rendered by default.

5 changes: 3 additions & 2 deletions lib/diameterAsync.py
@@ -2,6 +2,7 @@
import math
import asyncio
import yaml
import socket
from messagingAsync import RedisMessagingAsync


@@ -41,7 +42,7 @@ def __init__(self, logTool):
self.redisMessaging = RedisMessagingAsync(host=self.redisHost, port=self.redisPort, useUnixSocket=self.redisUseUnixSocket, unixSocketPath=self.redisUnixSocketPath)

self.logTool = logTool

self.hostname = socket.gethostname()

#Generates rounding for calculating padding
async def myRound(self, n, base=4):
@@ -246,7 +247,7 @@ async def getConnectedPeersByType(self, peerType: str) -> list:
if peerType not in peerTypes:
return []
filteredConnectedPeers = []
activePeers = await(self.redisMessaging.getValue(key="ActiveDiameterPeers"))
activePeers = await(self.redisMessaging.getValue(key="ActiveDiameterPeers", usePrefix=True, prefixHostname=self.hostname, prefixServiceName='diameter'))

for key, value in activePeers.items():
if activePeers.get(key, {}).get('peerType', '') == 'pgw' and activePeers.get(key, {}).get('connectionStatus', '') == 'connected':
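The net effect of this change is that the ActiveDiameterPeers key is now namespaced per host and per service, matching the handlePrefix helper added to lib/messaging.py in this PR. A minimal sketch of that naming scheme (the "hss01" hostname is illustrative):

```python
def handle_prefix(key: str, use_prefix: bool = False,
                  prefix_hostname: str = 'unknown',
                  prefix_service_name: str = 'common') -> str:
    """Mirror of handlePrefix in lib/messaging.py: namespace a key as host:service:key."""
    if use_prefix:
        return f"{prefix_hostname}:{prefix_service_name}:{key}"
    return key

# With usePrefix=True, the diameter worker on a host named "hss01" would
# now read its peer table from this key:
key = handle_prefix("ActiveDiameterPeers", use_prefix=True,
                    prefix_hostname="hss01", prefix_service_name="diameter")
```

Namespacing by hostname is what enables the redis-sentinel / stateless compatibility called out in the changelog: multiple HSS instances can share one Redis without clobbering each other's keys.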
6 changes: 4 additions & 2 deletions lib/logtool.py
@@ -1,6 +1,7 @@
import logging
import logging.handlers as handlers
import os, sys, time
import socket
from datetime import datetime
sys.path.append(os.path.realpath('../'))
import asyncio
@@ -41,6 +42,7 @@ def __init__(self, config: dict):

self.redisMessagingAsync = RedisMessagingAsync(host=self.redisHost, port=self.redisPort, useUnixSocket=self.redisUseUnixSocket, unixSocketPath=self.redisUnixSocketPath)
self.redisMessaging = RedisMessaging(host=self.redisHost, port=self.redisPort, useUnixSocket=self.redisUseUnixSocket, unixSocketPath=self.redisUnixSocketPath)
self.hostname = socket.gethostname()

async def logAsync(self, service: str, level: str, message: str, redisClient=None) -> bool:
"""
@@ -55,7 +57,7 @@ async def logAsync(self, service: str, level: str, message: str, redisClient=Non
timestamp = time.time()
dateTimeString = datetime.fromtimestamp(timestamp).strftime("%m/%d/%Y %H:%M:%S %Z").strip()
print(f"[{dateTimeString}] [{level.upper()}] {message}")
await(redisClient.sendLogMessage(serviceName=service.lower(), logLevel=level, logTimestamp=timestamp, message=message, logExpiry=60))
await(redisClient.sendLogMessage(serviceName=service.lower(), logLevel=level, logTimestamp=timestamp, message=message, logExpiry=60, usePrefix=True, prefixHostname=self.hostname, prefixServiceName='log'))
return True

def log(self, service: str, level: str, message: str, redisClient=None) -> bool:
@@ -71,7 +73,7 @@ def log(self, service: str, level: str, message: str, redisClient=None) -> bool:
timestamp = time.time()
dateTimeString = datetime.fromtimestamp(timestamp).strftime("%m/%d/%Y %H:%M:%S %Z").strip()
print(f"[{dateTimeString}] [{level.upper()}] {message}")
redisClient.sendLogMessage(serviceName=service.lower(), logLevel=level, logTimestamp=timestamp, message=message, logExpiry=60)
redisClient.sendLogMessage(serviceName=service.lower(), logLevel=level, logTimestamp=timestamp, message=message, logExpiry=60, usePrefix=True, prefixHostname=self.hostname, prefixServiceName='diameter')
return True

def setupFileLogger(self, loggerName: str, logFilePath: str):
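After this change, each log entry ends up as a JSON body pushed onto a hostname-namespaced queue. The sketch below recreates the resulting queue name and payload shape; the JSON field names match sendLogMessage in lib/messaging.py, but the helper itself is illustrative rather than PyHSS code.

```python
import json
import socket
import time

def build_log_entry(service: str, level: str, message: str) -> tuple:
    """Recreate the queue name and JSON body that logAsync now produces."""
    hostname = socket.gethostname()
    # handlePrefix(key='log', usePrefix=True, prefixServiceName='log') -> "<host>:log:log"
    queue = f"{hostname}:log:log"
    body = json.dumps({"message": message, "service": service.lower(),
                       "level": level, "timestamp": time.time()})
    return queue, body

queue, body = build_log_entry("HSS", "info", "ULA sent for test subscriber")
```

Note that the synchronous `log()` path above uses `prefixServiceName='diameter'` rather than `'log'`, so the two code paths write to differently named queues.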
81 changes: 51 additions & 30 deletions lib/messaging.py
@@ -13,19 +13,30 @@ def __init__(self, host: str='localhost', port: int=6379, useUnixSocket: bool=Fa
else:
self.redisClient = Redis(host=host, port=port)

def sendMessage(self, queue: str, message: str, queueExpiry: int=None) -> str:
def handlePrefix(self, key: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common'):
"""
Adds a prefix to the Key or Queue name, if enabled.
Returns the same Key or Queue if not enabled.
"""
if usePrefix:
return f"{prefixHostname}:{prefixServiceName}:{key}"
else:
return key

def sendMessage(self, queue: str, message: str, queueExpiry: int=None, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> str:
"""
Stores a message in a given Queue (Key).
"""
try:
queue = self.handlePrefix(key=queue, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
self.redisClient.rpush(queue, message)
if queueExpiry is not None:
self.redisClient.expire(queue, queueExpiry)
return f'{message} stored in {queue} successfully.'
except Exception as e:
return ''

def sendMetric(self, serviceName: str, metricName: str, metricType: str, metricAction: str, metricValue: float, metricHelp: str='', metricLabels: list=[], metricTimestamp: int=time.time_ns(), metricExpiry: int=None) -> str:
def sendMetric(self, serviceName: str, metricName: str, metricType: str, metricAction: str, metricValue: float, metricHelp: str='', metricLabels: list=[], metricTimestamp: int=time.time_ns(), metricExpiry: int=None, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> str:
"""
Stores a prometheus metric in a format readable by the metric service.
"""
@@ -44,35 +55,36 @@ def sendMetric(self, serviceName: str, metricName: str, metricType: str, metricA
}
])

metricQueueName = f"metric"
queue = self.handlePrefix(key='metric', usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)

try:
self.redisClient.rpush(metricQueueName, prometheusMetricBody)
self.redisClient.rpush(queue, prometheusMetricBody)
if metricExpiry is not None:
self.redisClient.expire(metricQueueName, metricExpiry)
self.redisClient.expire(queue, metricExpiry)
return f'Successfully stored metric called: {metricName}, with value of: {metricType}'
except Exception as e:
return ''

def sendLogMessage(self, serviceName: str, logLevel: str, logTimestamp: int, message: str, logExpiry: int=None) -> str:
def sendLogMessage(self, serviceName: str, logLevel: str, logTimestamp: int, message: str, logExpiry: int=None, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> str:
"""
Stores a message in a given Queue (Key).
"""
try:
logQueueName = f"log"
queue = self.handlePrefix(key='log', usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
logMessage = json.dumps({"message": message, "service": serviceName, "level": logLevel, "timestamp": logTimestamp})
self.redisClient.rpush(logQueueName, logMessage)
self.redisClient.rpush(queue, logMessage)
if logExpiry is not None:
self.redisClient.expire(logQueueName, logExpiry)
return f'{message} stored in {logQueueName} successfully.'
self.redisClient.expire(queue, logExpiry)
return f'{message} stored in {queue} successfully.'
except Exception as e:
return ''

def getMessage(self, queue: str) -> str:
def getMessage(self, queue: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> str:
"""
Gets the oldest message from a given Queue (Key), while removing it from the key as well. Deletes the key if the last message is being removed.
"""
try:
queue = self.handlePrefix(key=queue, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
message = self.redisClient.lpop(queue)
if message is None:
message = ''
@@ -85,101 +97,110 @@ def getMessage(self, queue: str) -> str:
except Exception as e:
return ''

def getQueues(self, pattern: str='*') -> list:
def getQueues(self, pattern: str='*', usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> list:
"""
Returns all Queues (Keys) in the database.
"""
try:
pattern = self.handlePrefix(key=pattern, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
allQueues = self.redisClient.scan_iter(match=pattern)
return [x.decode() for x in allQueues]
except Exception as e:
return f"{traceback.format_exc()}"

def getNextQueue(self, pattern: str='*') -> dict:
def getNextQueue(self, pattern: str='*', usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> dict:
"""
Returns the next Queue (Key) in the list.
"""
try:
pattern = self.handlePrefix(key=pattern, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
for nextQueue in self.redisClient.scan_iter(match=pattern):
return nextQueue.decode()
except Exception as e:
return {}

def awaitMessage(self, key: str):
def awaitMessage(self, key: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common'):
"""
Blocks until a message is received at the given key, then returns the message.
"""
try:
key = self.handlePrefix(key=key, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
message = self.redisClient.blpop(key)
return tuple(data.decode() for data in message)
except Exception as e:
return ''

def awaitBulkMessage(self, key: str, count: int=100):
def awaitBulkMessage(self, key: str, count: int=100, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common'):
"""
Blocks until one or more messages are received at the given key, then returns up to the number of messages specified by count.
"""
try:
key = self.handlePrefix(key=key, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
message = self.redisClient.blmpop(0, 1, key, direction='RIGHT', count=count)
return message
except Exception as e:
print(traceback.format_exc())
return ''

def deleteQueue(self, queue: str) -> bool:
def deleteQueue(self, queue: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> bool:
"""
Deletes the given Queue (Key)
"""
try:
queue = self.handlePrefix(key=queue, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
self.redisClient.delete(queue)
return True
except Exception as e:
return False

def setValue(self, key: str, value: str, keyExpiry: int=None) -> str:
def setValue(self, key: str, value: str, keyExpiry: int=None, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> str:
"""
Stores a value under a given key and sets an expiry (in seconds) if provided.
"""
try:
key = self.handlePrefix(key=key, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
self.redisClient.set(key, value)
if keyExpiry is not None:
self.redisClient.expire(key, keyExpiry)
return f'{value} stored in {key} successfully.'
except Exception as e:
return ''

def getValue(self, key: str) -> str:
def getValue(self, key: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> str:
"""
Gets the value stored under a given key.
"""
try:
message = self.redisClient.get(key)
if message is None:
message = ''
else:
return message
key = self.handlePrefix(key=key, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
message = self.redisClient.get(key)
if message is None:
message = ''
else:
return message
except Exception as e:
return ''

def getList(self, key: str) -> list:
def getList(self, key: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common') -> list:
"""
Gets the list stored under a given key.
"""
try:
allResults = self.redisClient.lrange(key, 0, -1)
if allResults is None:
result = []
else:
return [result.decode() for result in allResults]
key = self.handlePrefix(key=key, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
allResults = self.redisClient.lrange(key, 0, -1)
if allResults is None:
result = []
else:
return [result.decode() for result in allResults]
except Exception as e:
return []

def RedisHGetAll(self, key: str):
def RedisHGetAll(self, key: str, usePrefix: bool=False, prefixHostname: str='unknown', prefixServiceName: str='common'):
"""
Wrapper for Redis HGETALL
*Deprecated: will be removed upon completed database cleanup.
"""
try:
key = self.handlePrefix(key=key, usePrefix=usePrefix, prefixHostname=prefixHostname, prefixServiceName=prefixServiceName)
data = self.redisClient.hgetall(key)
return data
except Exception as e: