Success
#3199 (Mar 18, 2026, 11:43:09 AM)
Started 2 hr 57 min ago
Took 1.2 sec
on build4-deb12build-ansible
Revision: 354bd4905fa097fe4653a47ca44da1719e876d0f
Repository: https://gerrit.osmocom.org/osmo-ttcn3-hacks
s1gw: initial testcases for MME pooling

Three test cases covering the MME pool selection logic in OsmoS1GW:

* TC_mme_pool_reject_fallback: S1GW falls back to the next pool entry
  when the first MME rejects S1SetupReq with S1SetupFailure.
* TC_mme_pool_timeout_fallback: S1GW falls back when the first MME does
  not respond to S1SetupReq within the timeout.
* TC_mme_pool_all_reject: all pool entries reject S1SetupReq; S1GW must
  send S1SetupFailure to the eNB and tear down the connection.

Infrastructure added to support these tests:

* S1AP_Server.ttcn: S1AP_ServerList type; directed register/unregister
  helpers (f_ConnHdlr_s1ap_register_to / _unregister_from) for use when
  multiple S1AP_Server_CT instances are active simultaneously.
* S1GW_ConnHdlr.ttcn: f_ConnHdlr_s1ap_setup_pool() drives the pool setup
  sequence: pre-registers with all servers, sends S1SetupReq once (S1GW
  re-transmits it per-MME), then iterates through the expected behaviors
  (ACCEPT / REJECT / TIMEOUT) waiting for each server in turn.
* S1GW_Tests.ttcn: f_init_s1ap_srv(N) starts N MME emulators on
  consecutive IP addresses; f_TC_exec_pool() orchestrates pool tests.
* osmo-s1gw.config: a 'mme_pool' section with three entries is added
  alongside the existing sctp_client section. Older OsmoS1GW (without
  pooling support) will use sctp_client to connect to a single MME and
  the pool test cases will simply fail, as expected. Newer OsmoS1GW will
  use mme_pool and all three test cases will pass.

Change-Id: Ib8fd62e4352e3055971a669b8b363078bcd95d8d
Related: SYS#7052

s1gw: add testcases for impatient eNB during MME pool selection

Two new test cases covering scenarios where the eNB disconnects before
S1 setup completes, targeting specific states of the enb_proxy FSM:

* TC_mme_pool_enb_disc_wait_s1setup_req: eNB connects but disconnects
  before sending S1SetupReq (enb_proxy in wait_s1setup_req). No MME
  connection is ever attempted; S1GW must handle the disconnect cleanly.
* TC_mme_pool_enb_disc_wait_s1setup_rsp: eNB sends S1SetupReq, S1GW
  forwards it to the first pool MME (enb_proxy in wait_s1setup_rsp),
  then eNB disconnects before the response arrives. S1GW must detect
  the eNB disconnect and close the open MME connection in response.

A new helper S1GW_ConnHdlr.f_ConnHdlr_s1ap_close() is added for these
tests: unlike f_ConnHdlr_s1ap_disconnect(), it closes the eNB-side
socket without waiting for an S1APSRV_EVENT_CONN_DOWN from a pool
server (since in these scenarios either no MME connection exists yet,
or the CONN_DOWN is captured by the test body directly).

Change-Id: I5d27cdafcb9f595a2d3db59beff17cd55de2539e
Related: SYS#7052

s1gw: add tests for MME registry REST procedures

Add three test cases exercising the S1GW REST interface for MME pool
management. The REST TCs are gated on the mp_rest_enable module
parameter in the control block.

TC_rest_mme_list: query the MME pool list via REST and verify it
matches the three static entries from the 'mme_pool' section in
osmo-s1gw.config (mme0/mme1/mme2 with their respective addresses).

TC_rest_mme_add_del: add a new MME entry at runtime via REST, verify
it appears in both the list and individual GET responses, then delete
it and confirm it is gone.

TC_rest_mme_del_fallback: delete mme0 from the pool at runtime and
verify that a connecting eNB is routed directly to mme1, skipping the
deleted entry. The pool is restored to its original state afterwards
via f_REST_mme_pool_restore().
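The pool fallback behavior exercised by the TC_mme_pool_* test cases above can be sketched as follows. This is a minimal Python model of the selection logic, not the actual OsmoS1GW implementation: the `Outcome` enum and `select_mme()` helper are illustrative names standing in for the per-MME S1SetupReq outcomes (ACCEPT / REJECT / TIMEOUT) that the tests drive.

```python
from enum import Enum, auto

class Outcome(Enum):
    ACCEPT = auto()   # MME answered S1SetupReq with S1SetupResponse
    REJECT = auto()   # MME answered with S1SetupFailure
    TIMEOUT = auto()  # MME did not answer within the timeout

def select_mme(pool_outcomes):
    """Walk the pool in order and return the index of the first MME
    that accepts, or None if every entry rejects or times out (in
    which case S1GW sends S1SetupFailure to the eNB and tears the
    connection down)."""
    for idx, outcome in enumerate(pool_outcomes):
        if outcome is Outcome.ACCEPT:
            return idx
        # REJECT or TIMEOUT: fall back to the next pool entry
    return None

# TC_mme_pool_reject_fallback: first MME rejects, second accepts
assert select_mme([Outcome.REJECT, Outcome.ACCEPT]) == 1
# TC_mme_pool_timeout_fallback: first MME times out, second accepts
assert select_mme([Outcome.TIMEOUT, Outcome.ACCEPT]) == 1
# TC_mme_pool_all_reject: every entry fails -> failure towards the eNB
assert select_mme([Outcome.REJECT, Outcome.REJECT, Outcome.REJECT]) is None
```

Each assertion mirrors one of the three pool test cases, under the assumption that S1GW tries pool entries strictly in configured order.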
Also add:

* {ts,tr}_MmeItem templates to S1GW_REST_Types.ttcn
* f_REST_mme_find(): returns the integer index of a named entry in a
  MmeList, or -1 if not found; used for both presence and absence
  checks
* f_REST_mme_pool_restore(): deletes all current entries and re-adds
  mme0/mme1/mme2 in original order to keep pool state predictable
  across test cases

Change-Id: I260bc987ab8ae0ecb547d0b69b261fd97c5c9c23
Related: SYS#7052

s1gw: enable the REST interface, fix wrong REST port

REST had been disabled because only nightly builds supported it. The
latest stable release (v0.4.0) also supports the REST interface, so
let's enable it unconditionally by removing the mp_rest_enable module
parameter.

Also fix the REST port: mp_rest_port was incorrectly set to 8125 (the
StatsD port) instead of the actual REST port 8080.

Change-Id: I012749076c652ab541e569026eb01c696ad5adc8
Related: SYS#7052, SYS#7066

s1gw: use REST interface to check PFCP assoc state

It's quicker to query the IUT via the REST interface than to wait for
the StatsD metric "gauge.pfcp.associated.value" to be received. As a
bonus, we "learn" the local/remote RTS from the S1GW, which can be
used in new PFCP-related testcases.

Change-Id: Iec7594e79f533b08ee93b443a39cb9c8ff03da43

s1gw: add README.md

Change-Id: Ib5c1326c4260bf552b561a42f7ff9d3f28f89579
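The semantics of the two REST helpers described above can be sketched in Python. This is a simplified model, not the TTCN-3 code: pool entries are reduced to dicts with only a "name" key (real entries also carry addresses), and the actual REST add/delete calls are elided.

```python
def mme_find(mme_list, name):
    """Return the index of the entry named `name` in the pool list,
    or -1 if not found -- mirrors the described f_REST_mme_find()
    semantics, usable for both presence and absence checks."""
    for idx, entry in enumerate(mme_list):
        if entry["name"] == name:
            return idx
    return -1

def mme_pool_restore(pool, defaults):
    """Delete all current entries, then re-add the defaults in their
    original order -- mirrors the described f_REST_mme_pool_restore()
    semantics, keeping pool state predictable across test cases."""
    pool.clear()
    pool.extend(dict(d) for d in defaults)

defaults = [{"name": "mme0"}, {"name": "mme1"}, {"name": "mme2"}]
pool = [{"name": "mme1"}, {"name": "mme3"}]  # state left by a test
assert mme_find(pool, "mme3") == 1           # presence check
assert mme_find(pool, "mme0") == -1          # absence check
mme_pool_restore(pool, defaults)
assert [e["name"] for e in pool] == ["mme0", "mme1", "mme2"]
```

Returning -1 for "not found" (rather than raising) keeps a single helper useful for both the add/del presence checks and the post-delete absence checks in the test cases.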