artemis/titles/chuni/const.py


use SQL's limit/offset pagination for nextIndex/maxCount requests (#185)

Instead of retrieving the entire list of items/characters/scores/etc. at once (and even storing it in memory), use SQL's `LIMIT ... OFFSET ...` pagination so we only take what we need. Currently only CHUNITHM uses this, but this will also affect maimai DX and O.N.G.E.K.I. once the PR is ready.

Also snuck in a fix for CHUNITHM/maimai DX's `GetUserRivalMusicApi` to respect the `userRivalMusicLevelList` sent by the client.

### How this works

Say we have a `GetUserCharacterApi` request:

```json
{
    "userId": 10000,
    "maxCount": 700,
    "nextIndex": 0
}
```

Instead of getting the entire character list from the database (which can be very large if the user force-unlocked everything), add limit/offset to the query:

```python
select(character)
    .where(character.c.user == user_id)
    .order_by(character.c.id.asc())
    .limit(max_count + 1)
    .offset(next_index)
```

The query takes `maxCount + 1` items from the database to determine if there are more items than can be returned:

```python
rows = ...

if len(rows) > max_count:
    # return only max_count rows
    next_index += max_count
else:
    # return everything left
    next_index = -1
```

This has the benefit of not needing to load everything into memory (and also not having to store server state, as seen in the [`SCORE_BUFFER` list](https://gitea.tendokyu.moe/Hay1tsme/artemis/src/commit/2274b42358d9ef449ca541a46ce654b846ce7f7c/titles/chuni/base.py#L13)).

Reviewed-on: https://gitea.tendokyu.moe/Hay1tsme/artemis/pulls/185
Co-authored-by: beerpsi <beerpsi@duck.com>
Co-committed-by: beerpsi <beerpsi@duck.com>
2024-11-17 03:10:29 +08:00
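The paging scheme above can be sketched end-to-end with an in-memory SQLite table. This is an illustrative sketch, not ARTEMIS's actual schema or helper names; it only demonstrates the `LIMIT max_count + 1` / `OFFSET next_index` trick:

```python
import sqlite3

def get_user_characters(conn, user_id, max_count, next_index):
    # Ask for one row more than we intend to return, so we can tell
    # whether another page exists without a separate COUNT query.
    rows = conn.execute(
        "SELECT id FROM character WHERE user = ? ORDER BY id ASC LIMIT ? OFFSET ?",
        (user_id, max_count + 1, next_index),
    ).fetchall()
    if len(rows) > max_count:
        # More rows remain: return only max_count and advance the cursor.
        return rows[:max_count], next_index + max_count
    # This was the last page.
    return rows, -1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE character (id INTEGER PRIMARY KEY, user INTEGER)")
conn.executemany(
    "INSERT INTO character (id, user) VALUES (?, 10000)",
    [(i,) for i in range(5)],
)

page, nxt = get_user_characters(conn, 10000, 3, 0)    # first page: 3 rows, nxt == 3
page, nxt = get_user_characters(conn, 10000, 3, nxt)  # last page: 2 rows, nxt == -1
```

Because the cursor is just the row offset, the server keeps no per-session state between requests, which is exactly what removing `SCORE_BUFFER` relies on.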
from enum import Enum, IntEnum
class ChuniConstants:
    GAME_CODE = "SDBT"
    GAME_CODE_NEW = "SDHD"
    GAME_CODE_INT = "SDGS"

    CONFIG_NAME = "chuni.yaml"

    VER_CHUNITHM = 0
    VER_CHUNITHM_PLUS = 1
    VER_CHUNITHM_AIR = 2
    VER_CHUNITHM_AIR_PLUS = 3
    VER_CHUNITHM_STAR = 4
    VER_CHUNITHM_STAR_PLUS = 5
    VER_CHUNITHM_AMAZON = 6
    VER_CHUNITHM_AMAZON_PLUS = 7
    VER_CHUNITHM_CRYSTAL = 8
    VER_CHUNITHM_CRYSTAL_PLUS = 9
    VER_CHUNITHM_PARADISE = 10
    VER_CHUNITHM_NEW = 11
    VER_CHUNITHM_NEW_PLUS = 12
    VER_CHUNITHM_SUN = 13
    VER_CHUNITHM_SUN_PLUS = 14
    VER_CHUNITHM_LUMINOUS = 15
    VERSION_NAMES = [
        "CHUNITHM",
        "CHUNITHM PLUS",
        "CHUNITHM AIR",
        "CHUNITHM AIR PLUS",
        "CHUNITHM STAR",
        "CHUNITHM STAR PLUS",
        "CHUNITHM AMAZON",
        "CHUNITHM AMAZON PLUS",
        "CHUNITHM CRYSTAL",
        "CHUNITHM CRYSTAL PLUS",
        "CHUNITHM PARADISE",
        "CHUNITHM NEW!!",
        "CHUNITHM NEW PLUS!!",
        "CHUNITHM SUN",
        "CHUNITHM SUN PLUS",
        "CHUNITHM LUMINOUS",
    ]
    SCORE_RANK_INTERVALS_OLD = [
        (1007500, "SSS"),
        (1000000, "SS"),
        ( 975000, "S"),
        ( 950000, "AAA"),
        ( 925000, "AA"),
        ( 900000, "A"),
        ( 800000, "BBB"),
        ( 700000, "BB"),
        ( 600000, "B"),
        ( 500000, "C"),
        (      0, "D"),
    ]

    SCORE_RANK_INTERVALS_NEW = [
        (1009000, "SSS+"),  # New only
        (1007500, "SSS"),
        (1005000, "SS+"),  # New only
        (1000000, "SS"),
        ( 990000, "S+"),  # New only
        ( 975000, "S"),
        ( 950000, "AAA"),
        ( 925000, "AA"),
        ( 900000, "A"),
        ( 800000, "BBB"),
        ( 700000, "BB"),
        ( 600000, "B"),
        ( 500000, "C"),
        (      0, "D"),
    ]
    @classmethod
    def game_ver_to_string(cls, ver: int):
        return cls.VERSION_NAMES[ver]
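The interval tables above are ordered from the highest threshold down, so a score maps to its rank by taking the first entry whose threshold it meets. A minimal sketch of such a lookup (the helper name is illustrative, not part of this file; the table is copied from `SCORE_RANK_INTERVALS_NEW` so the snippet stands alone):

```python
# Copied from ChuniConstants.SCORE_RANK_INTERVALS_NEW for a self-contained sketch.
SCORE_RANK_INTERVALS_NEW = [
    (1009000, "SSS+"),
    (1007500, "SSS"),
    (1005000, "SS+"),
    (1000000, "SS"),
    ( 990000, "S+"),
    ( 975000, "S"),
    ( 950000, "AAA"),
    ( 925000, "AA"),
    ( 900000, "A"),
    ( 800000, "BBB"),
    ( 700000, "BB"),
    ( 600000, "B"),
    ( 500000, "C"),
    (      0, "D"),
]

def score_to_rank(score: int) -> str:
    # Walk the thresholds from highest to lowest; the first one met wins.
    for threshold, rank in SCORE_RANK_INTERVALS_NEW:
        if score >= threshold:
            return rank
    return "D"
```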
class MapAreaConditionType(IntEnum):
    """Condition types for the GetGameMapAreaConditionApi endpoint. Incomplete.

    For the MAP_CLEARED/MAP_AREA_CLEARED/TROPHY_OBTAINED conditions, the
    conditionId is the map/map area/trophy.

    For the RANK_*/ALL_JUSTICE conditions, the conditionId is
    songId * 100 + difficultyId. For example, Halcyon [ULTIMA] would be
    173 * 100 + 4 = 17304.
    """

    ALWAYS_UNLOCKED = 0

    MAP_CLEARED = 1
    MAP_AREA_CLEARED = 2
    TROPHY_OBTAINED = 3
    RANK_SSS = 19
    RANK_SSP = 20
    RANK_SS = 21
    RANK_SP = 22
    RANK_S = 23

    ALL_JUSTICE = 28
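As the docstring for this enum notes, rank/ALL_JUSTICE condition IDs encode a chart as `songId * 100 + difficultyId`. A small illustrative helper (the function name is hypothetical, not part of this file):

```python
def chart_condition_id(song_id: int, difficulty_id: int) -> int:
    """Encode a (song, difficulty) pair as a map area conditionId."""
    # e.g. Halcyon [ULTIMA]: 173 * 100 + 4 = 17304
    return song_id * 100 + difficulty_id
```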

class MapAreaConditionLogicalOperator(Enum):
    AND = 1
    OR = 2

class AvatarCategory(Enum):
    WEAR = 1
    HEAD = 2
    FACE = 3
    SKIN = 4
    ITEM = 5
    FRONT = 6
    BACK = 7

class ItemKind(IntEnum):
    NAMEPLATE = 1
    FRAME = 2
    """
    "Frame" is the background for the gauge/score/max combo display
    shown during gameplay. This item cannot be equipped (as of LUMINOUS)
    and is hardcoded to the current game's version.
    """

    TROPHY = 3
    SKILL = 4

    TICKET = 5
"""A statue is also a ticket."""
2024-11-04 05:37:27 +08:00
PRESENT = 6
    MUSIC_UNLOCK = 7

    MAP_ICON = 8
    SYSTEM_VOICE = 9
    SYMBOL_CHAT = 10
    AVATAR_ACCESSORY = 11

    ULTIMA_UNLOCK = 12
    """This only applies to ULTIMA difficulties that are *not* unlocked by
    SS-ing EXPERT+MASTER.
    """

class FavoriteItemKind(IntEnum):
    MUSIC = 1
    RIVAL = 2
    CHARACTER = 3