Datasets

- Modalities: Text
- Formats: parquet
- Size: < 1K rows
- Libraries: Datasets, pandas
The split has the following schema (bounds as reported by the viewer):

| Column | Type | Min | Max |
|---|---|---|---|
| repo | string | 5 | 53 |
| instance_id | string | 11 | 56 |
| base_commit | string | 40 | 40 |
| patch | string | 339 | 56.6k |
| test_patch | string | 0 | 895k |
| problem_statement | string | 27 | 55.6k |
| hints_text | string | 0 | 72k |
| created_at | int64 | 1,447B | 1,739B |
| labels | sequence (nullable) | 0 | 7 |
| category | string (4 classes) | n/a | n/a |
| edit_functions | sequence | 1 | 10 |
| added_functions | sequence | 0 | 32 |

For string columns the bounds are character lengths, for sequence columns they are element counts, and for `created_at` they are values: Unix epoch milliseconds, spanning roughly November 2015 to February 2025.
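Since the split is auto-converted to Parquet, it can be read with either of the listed libraries. A minimal sketch with 🤗 Datasets follows; the repository id is a placeholder, as the dataset's actual name is not shown on this page:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the dataset's actual namespace/name.
ds = load_dataset("<namespace>/<dataset-name>", split="train")

# Each row pairs a repository snapshot with a gold patch and its metadata.
row = ds[0]
print(row["repo"], row["instance_id"], row["category"])
print(row["problem_statement"][:200])
```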
---

- repo: UXARRAY/uxarray
- instance_id: UXARRAY__uxarray-1117
- base_commit: fe4cae1311db7fb21187b505e06018334a015c48
```diff
diff --git a/uxarray/grid/connectivity.py b/uxarray/grid/connectivity.py
index 78e936117..54bd1017e 100644
--- a/uxarray/grid/connectivity.py
+++ b/uxarray/grid/connectivity.py
@@ -146,13 +146,14 @@ def _build_n_nodes_per_face(face_nodes, n_face, n_max_face_nodes):
     """Constructs ``n_nodes_per_face``, which contains the number of non-fill-
     value nodes for each face in ``face_node_connectivity``"""

-    # padding to shape [n_face, n_max_face_nodes + 1]
-    closed = np.ones((n_face, n_max_face_nodes + 1), dtype=INT_DTYPE) * INT_FILL_VALUE
-
-    closed[:, :-1] = face_nodes.copy()
-
-    n_nodes_per_face = np.argmax(closed == INT_FILL_VALUE, axis=1)
-
+    n_face, n_max_face_nodes = face_nodes.shape
+    n_nodes_per_face = np.empty(n_face, dtype=INT_DTYPE)
+    for i in range(n_face):
+        c = 0
+        for j in range(n_max_face_nodes):
+            if face_nodes[i, j] != INT_FILL_VALUE:
+                c += 1
+        n_nodes_per_face[i] = c
     return n_nodes_per_face

diff --git a/uxarray/grid/coordinates.py b/uxarray/grid/coordinates.py
index 45e00ba42..2d78b978a 100644
--- a/uxarray/grid/coordinates.py
+++ b/uxarray/grid/coordinates.py
@@ -328,23 +328,25 @@ def _construct_face_centroids(node_x, node_y, node_z, face_nodes, n_nodes_per_fa
     tuple
         The x, y, and z coordinates of the centroids.
     """
+
     centroid_x = np.zeros((face_nodes.shape[0]), dtype=np.float64)
     centroid_y = np.zeros((face_nodes.shape[0]), dtype=np.float64)
     centroid_z = np.zeros((face_nodes.shape[0]), dtype=np.float64)
-    n_face = n_nodes_per_face.shape[0]
-
-    for i_face in prange(n_face):
-        n_max_nodes = n_nodes_per_face[i_face]
-        x = np.mean(node_x[face_nodes[i_face, 0:n_max_nodes]])
-        y = np.mean(node_y[face_nodes[i_face, 0:n_max_nodes]])
-        z = np.mean(node_z[face_nodes[i_face, 0:n_max_nodes]])
+    for face_idx in prange(face_nodes.shape[0]):
+        n_max_nodes = n_nodes_per_face[face_idx]
+        # Compute Cartesian Average
+        x = np.mean(node_x[face_nodes[face_idx, 0:n_max_nodes]])
+        y = np.mean(node_y[face_nodes[face_idx, 0:n_max_nodes]])
+        z = np.mean(node_z[face_nodes[face_idx, 0:n_max_nodes]])
+        # Normalize coordinates
         x, y, z = _normalize_xyz_scalar(x, y, z)
+        # Store coordinates
+        centroid_x[face_idx] = x
+        centroid_y[face_idx] = y
+        centroid_z[face_idx] = z
-        centroid_x[i_face] = x
-        centroid_y[i_face] = y
-        centroid_z[i_face] = z
     return centroid_x, centroid_y, centroid_z
```
**Optimize Face Centroid Calculations**

If `Grid.face_lon` does not exist, `_populate_face_centroids()` (specifically the `_construct_face_centroids()` call inside it) takes extremely long for large datasets. For instance, the benchmark/profiling below, on a ~4 GB SCREAM dataset, shows around 5 minutes:

@rajeeja FYI: I'm already working on this and have gotten optimized results, which will be good for the "cartesian" parts of the face center calculations, but you may want to look into the `Welzl` parts as well, i.e. `_populate_face_centerpoints()`.

<img width="1065" alt="Image" src="https://github.com/user-attachments/assets/9aba545f-0fdb-4a4c-b2be-b8fb9ffe087e" />
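The patch's explicit loops look like a step backwards from vectorized NumPy, but the companion function uses `prange`, so these loops are evidently compiled with Numba, where a tight loop avoids the large padded temporary the old `argmax` approach allocated. A standalone sketch of the counting idea, assuming `numba` is installed (the fill value here is an illustrative stand-in for uxarray's `INT_FILL_VALUE`):

```python
import numpy as np
from numba import njit

INT_FILL_VALUE = np.iinfo(np.int64).min  # stand-in for uxarray's fill value

@njit(cache=True)
def count_valid_nodes(face_nodes):
    # Count non-fill-value entries per row without allocating a padded copy,
    # mirroring the loop structure introduced by the patch.
    n_face, n_max = face_nodes.shape
    out = np.empty(n_face, dtype=np.int64)
    for i in range(n_face):
        c = 0
        for j in range(n_max):
            if face_nodes[i, j] != INT_FILL_VALUE:
                c += 1
        out[i] = c
    return out

faces = np.array([[0, 1, 2, INT_FILL_VALUE], [3, 4, 5, 6]], dtype=np.int64)
print(count_valid_nodes(faces))  # [3 4]
```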
- created_at: 1,734,798,627,000
- labels: ["run-benchmark"]
- category: Performance Issue
- edit_functions: ["uxarray/grid/connectivity.py:_build_n_nodes_per_face", "uxarray/grid/coordinates.py:_construct_face_centroids"]
- added_functions: []
---

- repo: ultralytics/ultralytics
- instance_id: ultralytics__ultralytics-17810
- base_commit: d8c43874ae830a36d2adeac4a44a8ce5697e972c
```diff
diff --git a/ultralytics/utils/ops.py b/ultralytics/utils/ops.py
index 25e83c61c3a..ac53546ed1b 100644
--- a/ultralytics/utils/ops.py
+++ b/ultralytics/utils/ops.py
@@ -75,9 +75,8 @@ def segment2box(segment, width=640, height=640):
         (np.ndarray): the minimum and maximum x and y values of the segment.
     """
     x, y = segment.T  # segment xy
-    inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
-    x = x[inside]
-    y = y[inside]
+    x = x.clip(0, width)
+    y = y.clip(0, height)
     return (
         np.array([x.min(), y.min(), x.max(), y.max()], dtype=segment.dtype)
         if any(x)
```
**Training labels not applied properly to training data**

### Search before asking

- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.

### Ultralytics YOLO Component

Train

### Bug

# Bug

Labels are not included in the generated train_batch**X**.jpg images during training of a segmentation model. Code to reproduce is at the bottom of this section, including the example training data.

## Likely cause of bug

I am not familiar with how the training label images are generated; however, I strongly suspect the issue occurs when no points defining the polygon (label) remain inside the image. This happens when YOLO performs augmentations such as crop, resize, stretch, etc., as these can morph the label so that all of its points fall outside the image. The mask can then cover up to the entire image yet still not be included.

### I do not know if this affects anything other than segmentation!

### This may actually affect the training data itself and not just the generated image examples, but I am not sure!

## Examples

- All white parts of the images are included in the label, so if they are unlabelled the bug has occurred

![train_batch41](https://github.com/user-attachments/assets/ff8243c4-badb-4ea9-a5c0-64b9c28fbef6)
![train_batch42](https://github.com/user-attachments/assets/17895e1b-a967-4c6d-8a18-39b59962893d)

### Code to reproduce, instructions in other section

[GitIssues.zip](https://github.com/user-attachments/files/17916419/GitIssues.zip)

### Environment

```
Ultralytics 8.3.29 🚀 Python-3.10.12 torch-2.4.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4090, 24564MiB)
Setup complete ✅ (32 CPUs, 15.5 GB RAM, 23.5/251.0 GB disk)

OS                  Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Environment         Linux
Python              3.10.12
Install             pip
RAM                 15.47 GB
Disk                23.5/251.0 GB
CPU                 13th Gen Intel Core(TM) i9-13900
CPU count           32
GPU                 NVIDIA GeForce RTX 4090, 24564MiB
GPU count           1
CUDA                12.1

numpy               ✅ 2.1.2>=1.23.0
matplotlib          ✅ 3.9.2>=3.3.0
opencv-python       ✅ 4.10.0.84>=4.6.0
pillow              ✅ 10.4.0>=7.1.2
pyyaml              ✅ 5.4.1>=5.3.1
requests            ✅ 2.32.3>=2.23.0
scipy               ✅ 1.14.1>=1.4.1
torch               ✅ 2.4.1>=1.8.0
torchvision         ✅ 0.19.1>=0.9.0
tqdm                ✅ 4.66.5>=4.64.0
psutil              ✅ 6.0.0
py-cpuinfo          ✅ 9.0.0
pandas              ✅ 2.2.3>=1.1.4
seaborn             ✅ 0.13.2>=0.11.0
ultralytics-thop    ✅ 2.0.11>=2.0.0
numpy               ✅ 2.1.2<2.0.0; sys_platform == "darwin"
torch               ✅ 2.4.1!=2.4.0,>=1.8.0; sys_platform == "win32"
```

### Minimal Reproducible Example

# How to reproduce

1. Download & extract the provided training images, config (.yaml) and test_yolo.py file
2. Edit the .yaml file so that the folder path is correct
3. Run test_yolo.py
4. Examine the generated train_batch**X**.jpg images to see if the bug occurred (you may need to train more than once)

## What to look for

- Any part that is white is labelled, so if any white pixels are unlabelled this bug has occurred

### Examples

![train_batch0](https://github.com/user-attachments/assets/fe7f5b3f-1b00-4004-beb1-a50b5d5413b0)

- In this case the bottom left image is clearly white, but unlabelled

![train_batch2](https://github.com/user-attachments/assets/25cd0a90-8e46-48e8-ba99-0d15cf620719)

- The top right image does have white, but it isn't labelled

### Additional

_No response_

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
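The one-line fix swaps vertex filtering for clipping. A self-contained sketch (not Ultralytics code) of why that matters when augmentation pushes every polygon vertex out of frame:

```python
import numpy as np

def box_from_filtered(segment, width=640, height=640):
    # Old behaviour: drop vertices that fall outside the image.
    x, y = segment.T
    inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
    x, y = x[inside], y[inside]
    return np.array([x.min(), y.min(), x.max(), y.max()]) if x.size else np.zeros(4)

def box_from_clipped(segment, width=640, height=640):
    # New behaviour: clamp vertices to the image bounds.
    x, y = segment.T
    x, y = x.clip(0, width), y.clip(0, height)
    return np.array([x.min(), y.min(), x.max(), y.max()])

# Augmentation has pushed every vertex out of frame, yet the polygon
# still covers the whole image.
seg = np.array([[-10.0, -10.0], [650.0, -10.0], [650.0, 650.0], [-10.0, 650.0]])
print(box_from_filtered(seg))  # [0. 0. 0. 0.]        -> label silently lost
print(box_from_clipped(seg))   # [  0.   0. 640. 640.] -> full-image box kept
```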
👋 Hello @TheOfficialOzone, thank you for bringing this to our attention 🚀! We understand that you're encountering an issue with labels not being applied correctly during the training of a segmentation model on the Ultralytics repository.

For us to assist you effectively, please ensure that you've provided a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) if it's not already included in your report. This will help us understand and address the issue more efficiently. It seems like you've already attached a code example and some test images, which is great! 📄

In the meantime, we suggest ensuring all your dependencies are up-to-date. Upgrade to the latest `ultralytics` package, including all requirements, within a Python >=3.8 environment using PyTorch >=1.8 to see if the issue persists:

```bash
pip install -U ultralytics
```

For further tips and tricks regarding custom training, please refer to our [Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/).

Join our Ultralytics community for real-time support or discussions:

- Head over to [Discord](https://ultralytics.com/discord) for chat support 🎧
- Visit [Discourse](https://community.ultralytics.com) for deeper discussions
- Share experiences or get insightful feedback on our [Subreddit](https://reddit.com/r/ultralytics)

Finally, an Ultralytics engineer will review the details of your issue soon and follow up with you for additional help. Thank you for your patience and cooperation!

## Environments

In case you wish to shift your work to a more verified environment, you might consider:

- **Notebooks** with free GPU access: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/models/ultralytics/yolo11"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>

For more details on different environments, please refer to the [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/), [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/), or the [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/). We appreciate your engagement with the Ultralytics repository and hope to resolve your issue soon! 🌟

---

This issue persists after running `pip install -U ultralytics`. The version that was upgraded to was ultralytics 8.3.37.

---

@TheOfficialOzone Thanks for reporting! I'm able to reproduce this with our dataset. I'll look into it!
- created_at: 1,732,632,265,000
- labels: ["enhancement", "segment"]
- category: Bug Report
- edit_functions: ["ultralytics/utils/ops.py:segment2box"]
- added_functions: []
---

- repo: Chainlit/chainlit
- instance_id: Chainlit__chainlit-1575
- base_commit: 8b2d4bacfd4fa2c8af72e2d140d527d20125b07b
diff --git a/backend/chainlit/config.py b/backend/chainlit/config.py index b90f162f07..18ee6be8db 100644 --- a/backend/chainlit/config.py +++ b/backend/chainlit/config.py @@ -311,6 +311,8 @@ class CodeSettings: @dataclass() class ProjectSettings(DataClassJsonMixin): allow_origins: List[str] = Field(default_factory=lambda: ["*"]) + # Socket.io client transports option + transports: Optional[List[str]] = None enable_telemetry: bool = True # List of environment variables to be provided by each user to use the app. If empty, no environment variables will be asked to the user. user_env: Optional[List[str]] = None diff --git a/backend/chainlit/server.py b/backend/chainlit/server.py index 5118f544a7..7aeabe5329 100644 --- a/backend/chainlit/server.py +++ b/backend/chainlit/server.py @@ -301,7 +301,10 @@ def get_html_template(): <meta property="og:url" content="{url}"> <meta property="og:root_path" content="{ROOT_PATH}">""" - js = f"""<script>{f"window.theme = {json.dumps(config.ui.theme.to_dict())}; " if config.ui.theme else ""}</script>""" + js = f"""<script> +{f"window.theme = {json.dumps(config.ui.theme.to_dict())}; " if config.ui.theme else ""} +{f"window.transports = {json.dumps(config.project.transports)}; " if config.project.transports else "undefined"} +</script>""" css = None if config.ui.custom_css: diff --git a/backend/chainlit/socket.py b/backend/chainlit/socket.py index d79c76c16e..5053262e2f 100644 --- a/backend/chainlit/socket.py +++ b/backend/chainlit/socket.py @@ -1,7 +1,6 @@ import asyncio import json import time -import uuid from typing import Any, Dict, Literal from urllib.parse import unquote @@ -77,24 +76,8 @@ def load_user_env(user_env): return user_env -def build_anon_user_identifier(environ): - scope = environ.get("asgi.scope", {}) - client_ip, _ = scope.get("client") - ip = environ.get("HTTP_X_FORWARDED_FOR", client_ip) - - try: - headers = scope.get("headers", {}) - user_agent = next( - (v.decode("utf-8") for k, v in headers if k.decode("utf-8") == "user-agent") - ) - return str(uuid.uuid5(uuid.NAMESPACE_DNS, user_agent + ip)) - - except StopIteration: - return str(uuid.uuid5(uuid.NAMESPACE_DNS, ip)) - - @sio.on("connect") -async def connect(sid, environ): +async def connect(sid, environ, auth): if ( not config.code.on_chat_start and not config.code.on_message @@ -110,8 +93,8 @@ async def connect(sid, environ): try: # Check if the authentication is required if login_required: - authorization_header = environ.get("HTTP_AUTHORIZATION") - token = authorization_header.split(" ")[1] if authorization_header else None + token = auth.get("token") + token = token.split(" ")[1] if token else None user = await get_current_user(token=token) except Exception: logger.info("Authentication failed") @@ -125,16 +108,16 @@ def emit_fn(event, data): def emit_call_fn(event: Literal["ask", "call_fn"], data, timeout): return sio.call(event, data, timeout=timeout, to=sid) - session_id = environ.get("HTTP_X_CHAINLIT_SESSION_ID") + session_id = auth.get("sessionId") if restore_existing_session(sid, session_id, emit_fn, emit_call_fn): return True - user_env_string = environ.get("HTTP_USER_ENV") + user_env_string = auth.get("userEnv") user_env = load_user_env(user_env_string) - client_type = environ.get("HTTP_X_CHAINLIT_CLIENT_TYPE") + client_type = auth.get("clientType") http_referer = environ.get("HTTP_REFERER") - url_encoded_chat_profile = environ.get("HTTP_X_CHAINLIT_CHAT_PROFILE") + url_encoded_chat_profile = auth.get("chatProfile") chat_profile = ( unquote(url_encoded_chat_profile) if 
url_encoded_chat_profile else None ) @@ -149,7 +132,7 @@ def emit_call_fn(event: Literal["ask", "call_fn"], data, timeout): user=user, token=token, chat_profile=chat_profile, - thread_id=environ.get("HTTP_X_CHAINLIT_THREAD_ID"), + thread_id=auth.get("threadId"), languages=environ.get("HTTP_ACCEPT_LANGUAGE"), http_referer=http_referer, ) @@ -162,13 +145,13 @@ def emit_call_fn(event: Literal["ask", "call_fn"], data, timeout): async def connection_successful(sid): context = init_ws_context(sid) - if context.session.restored: - return - await context.emitter.task_end() await context.emitter.clear("clear_ask") await context.emitter.clear("clear_call_fn") + if context.session.restored: + return + if context.session.thread_id_to_resume and config.code.on_chat_resume: thread = await resume_thread(context.session) if thread: @@ -312,17 +295,13 @@ async def message(sid, payload: MessagePayload): async def window_message(sid, data): """Handle a message send by the host window.""" session = WebsocketSession.require(sid) - context = init_ws_context(session) - - await context.emitter.task_start() + init_ws_context(session) if config.code.on_window_message: try: await config.code.on_window_message(data) except asyncio.CancelledError: pass - finally: - await context.emitter.task_end() @sio.on("audio_start") diff --git a/frontend/src/App.tsx b/frontend/src/App.tsx index cc80e03ac9..9238ca2519 100644 --- a/frontend/src/App.tsx +++ b/frontend/src/App.tsx @@ -42,6 +42,7 @@ declare global { light?: ThemOverride; dark?: ThemOverride; }; + transports?: string[] } } @@ -99,6 +100,7 @@ function App() { return; } else { connect({ + transports: window.transports, userEnv, accessToken }); diff --git a/libs/copilot/src/chat/index.tsx b/libs/copilot/src/chat/index.tsx index 5f0a0779e7..3cc4bd3289 100644 --- a/libs/copilot/src/chat/index.tsx +++ b/libs/copilot/src/chat/index.tsx @@ -12,6 +12,7 @@ export default function ChatWrapper() { useEffect(() => { if (session?.socket?.connected) return; connect({ + transports: window.transports, userEnv: {}, accessToken: `Bearer ${accessToken}` }); diff --git a/libs/react-client/src/useChatSession.ts b/libs/react-client/src/useChatSession.ts index 441e66d665..b1079179f0 100644 --- a/libs/react-client/src/useChatSession.ts +++ b/libs/react-client/src/useChatSession.ts @@ -78,16 +78,18 @@ const useChatSession = () => { // Use currentThreadId as thread id in websocket header useEffect(() => { if (session?.socket) { - session.socket.io.opts.extraHeaders!['X-Chainlit-Thread-Id'] = + session.socket.auth["threadId"] = currentThreadId || ''; } }, [currentThreadId]); const _connect = useCallback( ({ + transports, userEnv, accessToken }: { + transports?: string[] userEnv: Record<string, string>; accessToken?: string; }) => { @@ -100,16 +102,17 @@ const useChatSession = () => { const socket = io(uri, { path, - extraHeaders: { - Authorization: accessToken || '', - 'X-Chainlit-Client-Type': client.type, - 'X-Chainlit-Session-Id': sessionId, - 'X-Chainlit-Thread-Id': idToResume || '', - 'user-env': JSON.stringify(userEnv), - 'X-Chainlit-Chat-Profile': chatProfile - ? encodeURIComponent(chatProfile) - : '' - } + withCredentials: true, + transports, + auth: { + token: accessToken, + clientType: client.type, + sessionId, + threadId: idToResume || '', + userEnv: JSON.stringify(userEnv), + chatProfile: chatProfile ? encodeURIComponent(chatProfile) : '' + } + }); setSession((old) => { old?.socket?.removeAllListeners();
```diff
diff --git a/cypress/e2e/copilot/.chainlit/config.toml b/cypress/e2e/copilot/.chainlit/config.toml
index e2a93af08f..9c42755715 100644
--- a/cypress/e2e/copilot/.chainlit/config.toml
+++ b/cypress/e2e/copilot/.chainlit/config.toml
@@ -13,7 +13,7 @@ session_timeout = 3600
 cache = false

 # Authorized origins
-allow_origins = ["*"]
+allow_origins = ["http://127.0.0.1:8000"]

 # Follow symlink for asset mount (see https://github.com/Chainlit/chainlit/issues/317)
 # follow_symlink = false
```
**Security: allowed origins should not be `*` by default**

CORS headers should be restricted, at minimum, to the current domain by default.
@dosu Where do we have to look in the settings/code to set this to a sensible/safe default value?

To set the allowed origins for CORS headers to a sensible/safe default value, you need to look at the `allow_origins` setting in the `config.toml` file.

```toml
# Authorized origins
allow_origins = ["*"]
```

You should replace `["*"]` with a list of specific origins that you want to allow. For example:

```toml
# Authorized origins
allow_origins = ["https://example.com", "https://anotherdomain.com"]
```

This setting can be found in multiple `config.toml` files within the repository, such as:

- `cypress/e2e/copilot/.chainlit/config.toml` <sup>[[1]](https://github.com/Chainlit/chainlit/blob/main/cypress/e2e/copilot/.chainlit/config.toml)</sup>
- `cypress/e2e/tasklist/.chainlit/config.toml` <sup>[[2]](https://github.com/Chainlit/chainlit/blob/main/cypress/e2e/tasklist/.chainlit/config.toml)</sup>
- `cypress/e2e/elements/.chainlit/config.toml` <sup>[[3]](https://github.com/Chainlit/chainlit/blob/main/cypress/e2e/elements/.chainlit/config.toml)</sup>
- `cypress/e2e/custom_build/.chainlit/config.toml` <sup>[[4]](https://github.com/Chainlit/chainlit/blob/main/cypress/e2e/custom_build/.chainlit/config.toml)</sup>
- `copilot/.chainlit/config.toml` in Chainlit's cookbook repo <sup>[[5]](https://github.com/Chainlit/cookbook/blob/main/copilot/.chainlit/config.toml)</sup>

Additionally, the `allow_origins` setting can be overridden by the `CHAINLIT_ALLOW_ORIGINS` environment variable <sup>[[6]](https://github.com/Chainlit/chainlit/blob/main/backend/chainlit/config.py)</sup>.

---

@dosu Maybe we could make the default in the settings file 127.0.0.1 and localhost, then document users to change it for production to whatever website/domain they use?

Yes, you can set the default allowed origins in the settings file to `127.0.0.1` and `localhost`, and then document for users to change it for production to their specific domains. You can modify the `config.toml` file as follows:

```toml
[project]
# Authorized origins
allow_origins = ["127.0.0.1", "localhost"]
```

This change can be made in the `config.toml` files located in various directories, such as:

- `cypress/e2e/copilot/.chainlit/config.toml`
- `cypress/e2e/tasklist/.chainlit/config.toml`
- `cypress/e2e/elements/.chainlit/config.toml`
- `cypress/e2e/custom_build/.chainlit/config.toml`
- `copilot/.chainlit/config.toml` in Chainlit's cookbook repo <sup>[[1]](https://github.com/Chainlit/chainlit/blob/main/backend/chainlit/config.py)</sup><sup>[[2]](https://github.com/Chainlit/cookbook/blob/main/copilot/.chainlit/config.toml)</sup>

Additionally, you should document this change and instruct users to update the `allow_origins` setting for their production environments to their specific domains.
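For context, a generic sketch of what a restrictive allow-list looks like at the framework level. This is not Chainlit's actual wiring, but Chainlit's backend is FastAPI-based, and FastAPI/Starlette express the same idea through `CORSMiddleware`:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow-list specific origins instead of the wildcard "*"; browsers will
# reject cross-origin requests from any site not listed here.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8000", "https://example.com"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```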
- created_at: 1,733,733,602,000
- labels: ["size:M"]
- category: Security Vulnerability
- edit_functions: ["backend/chainlit/config.py:ProjectSettings", "backend/chainlit/server.py:get_html_template", "backend/chainlit/socket.py:build_anon_user_identifier", "backend/chainlit/socket.py:connect", "backend/chainlit/socket.py:connection_successful", "backend/chainlit/socket.py:window_message"]
- added_functions: []
---

- repo: huggingface/transformers
- instance_id: huggingface__transformers-22496
- base_commit: 41d47db90fbe9937c0941f2f9cdb2ddd83e49a2e
diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py index 91de6810b17e..96f91a0a43dd 100644 --- a/src/transformers/models/whisper/modeling_whisper.py +++ b/src/transformers/models/whisper/modeling_whisper.py @@ -34,7 +34,12 @@ SequenceClassifierOutput, ) from ...modeling_utils import PreTrainedModel -from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings +from ...utils import ( + add_start_docstrings, + add_start_docstrings_to_model_forward, + logging, + replace_return_docstrings, +) from .configuration_whisper import WhisperConfig from .tokenization_whisper import TASK_IDS, TO_LANGUAGE_CODE @@ -1464,6 +1469,7 @@ def generate( task=None, language=None, is_multilingual=None, + prompt_ids: Optional[torch.Tensor] = None, **kwargs, ): """ @@ -1521,6 +1527,11 @@ def generate( find all the possible language tokens in the `model.generation_config.lang_to_id` dictionary. is_multilingual (`bool`, *optional*): Whether or not the model is multilingual. + prompt_ids (`torch.Tensor`, *optional*): + Rank-1 tensor of token IDs created by passing text to [`~WhisperProcessor.get_prompt_ids`] that is + provided as a prompt to each chunk. This can be used to provide or "prompt-engineer" a context for + transcription, e.g. custom vocabularies or proper nouns to make it more likely to predict those words + correctly. It cannot be used in conjunction with `decoder_start_token_id` as it overwrites this value. kwargs: Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder @@ -1567,8 +1578,21 @@ def generate( if task is not None: generation_config.task = task - forced_decoder_ids = [] - if task is not None or language is not None: + forced_decoder_ids = None + + # Legacy code for backward compatibility + if hasattr(self.config, "forced_decoder_ids") and self.config.forced_decoder_ids is not None: + forced_decoder_ids = self.config.forced_decoder_ids + elif ( + hasattr(self.generation_config, "forced_decoder_ids") + and self.generation_config.forced_decoder_ids is not None + ): + forced_decoder_ids = self.generation_config.forced_decoder_ids + else: + forced_decoder_ids = kwargs.get("forced_decoder_ids", None) + + if task is not None or language is not None or (forced_decoder_ids is None and prompt_ids is not None): + forced_decoder_ids = [] if hasattr(generation_config, "language"): if generation_config.language in generation_config.lang_to_id.keys(): language_token = generation_config.language @@ -1593,27 +1617,48 @@ def generate( raise ValueError( f"The `{generation_config.task}`task is not supported. 
The task should be one of `{TASK_IDS}`" ) - else: + elif hasattr(generation_config, "task_to_id"): forced_decoder_ids.append((2, generation_config.task_to_id["transcribe"])) # defaults to transcribe if hasattr(generation_config, "no_timestamps_token_id") and not generation_config.return_timestamps: idx = forced_decoder_ids[-1][0] + 1 if forced_decoder_ids else 1 forced_decoder_ids.append((idx, generation_config.no_timestamps_token_id)) - # Legacy code for backward compatibility - elif hasattr(self.config, "forced_decoder_ids") and self.config.forced_decoder_ids is not None: - forced_decoder_ids = self.config.forced_decoder_ids - elif ( - hasattr(self.generation_config, "forced_decoder_ids") - and self.generation_config.forced_decoder_ids is not None - ): - forced_decoder_ids = self.generation_config.forced_decoder_ids + if forced_decoder_ids is not None: + generation_config.forced_decoder_ids = forced_decoder_ids + + if prompt_ids is not None: + if kwargs.get("decoder_start_token_id") is not None: + raise ValueError( + "When specifying `prompt_ids`, you cannot also specify `decoder_start_token_id` as it gets overwritten." + ) + prompt_ids = prompt_ids.tolist() + decoder_start_token_id, *text_prompt_ids = prompt_ids + # Set the decoder_start_token_id to <|startofprev|> + kwargs.update({"decoder_start_token_id": decoder_start_token_id}) + + # Update the max generation length to include the prompt + specified_max_length = kwargs.pop("max_new_tokens", None) or kwargs.pop("max_length", None) + default_max_length = generation_config.max_new_tokens or generation_config.max_length + non_prompt_max_length = specified_max_length or default_max_length + kwargs["max_new_tokens"] = non_prompt_max_length + len(text_prompt_ids) + + # Reformat the forced_decoder_ids to incorporate the prompt + non_prompt_forced_decoder_ids = ( + kwargs.pop("forced_decoder_ids", None) or generation_config.forced_decoder_ids + ) + forced_decoder_ids = [ + # Slicing the text prompt ids in a manner consistent with the OpenAI implementation + # to accomodate context space for the prefix (see https://github.com/openai/whisper/blob/c09a7ae299c4c34c5839a76380ae407e7d785914/whisper/decoding.py#L599) + *text_prompt_ids[-self.config.max_length // 2 - 1 :], + generation_config.decoder_start_token_id, + *[token for _rank, token in non_prompt_forced_decoder_ids], + ] + forced_decoder_ids = [(rank + 1, token) for rank, token in enumerate(forced_decoder_ids)] + generation_config.forced_decoder_ids = forced_decoder_ids if generation_config.return_timestamps: logits_processor = [WhisperTimeStampLogitsProcessor(generation_config)] - if len(forced_decoder_ids) > 0: - generation_config.forced_decoder_ids = forced_decoder_ids - return super().generate( inputs, generation_config, diff --git a/src/transformers/models/whisper/processing_whisper.py b/src/transformers/models/whisper/processing_whisper.py index 8c158b041f7c..b0d0d6c95450 100644 --- a/src/transformers/models/whisper/processing_whisper.py +++ b/src/transformers/models/whisper/processing_whisper.py @@ -16,6 +16,7 @@ Speech processor class for Whisper """ + from ...processing_utils import ProcessorMixin @@ -91,3 +92,6 @@ def decode(self, *args, **kwargs): the docstring of this method for more information. 
""" return self.tokenizer.decode(*args, **kwargs) + + def get_prompt_ids(self, text: str, return_tensors="np"): + return self.tokenizer.get_prompt_ids(text, return_tensors=return_tensors) diff --git a/src/transformers/models/whisper/tokenization_whisper.py b/src/transformers/models/whisper/tokenization_whisper.py index 24eb72a0b0f9..4c7c9c89fd3b 100644 --- a/src/transformers/models/whisper/tokenization_whisper.py +++ b/src/transformers/models/whisper/tokenization_whisper.py @@ -606,6 +606,11 @@ def _decode( ) -> str: self._decode_use_source_tokenizer = kwargs.pop("use_source_tokenizer", False) + if skip_special_tokens: + prompt_token_id = self.convert_tokens_to_ids("<|startofprev|>") + decoder_start_token_id = self.convert_tokens_to_ids("<|startoftranscript|>") + token_ids = self._strip_prompt(token_ids, prompt_token_id, decoder_start_token_id) + filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens) # To avoid mixing byte-level and unicode for byte-level BPT @@ -714,6 +719,31 @@ def _decode_asr(self, model_outputs, *, return_timestamps, return_language, time time_precision=time_precision, ) + def get_prompt_ids(self, text: str, return_tensors="np"): + """Converts prompt text to IDs that can be passed to [`~WhisperForConditionalGeneration.generate`].""" + batch_encoding = self("<|startofprev|>", text.strip(), add_prefix_space=True, add_special_tokens=False) + + # Check for special tokens + prompt_text_ids = batch_encoding["input_ids"][1:] + special_token_id = next((x for x in prompt_text_ids if x >= self.all_special_ids[0]), None) + if special_token_id is not None: + token = self.convert_ids_to_tokens(special_token_id) + raise ValueError(f"Encountered text in the prompt corresponding to disallowed special token: {token}.") + + batch_encoding.convert_to_tensors(tensor_type=return_tensors) + return batch_encoding["input_ids"] + + @staticmethod + def _strip_prompt(token_ids: List[int], prompt_token_id: int, decoder_start_token_id: int): + has_prompt = isinstance(token_ids, list) and token_ids and token_ids[0] == prompt_token_id + if has_prompt: + if decoder_start_token_id in token_ids: + return token_ids[token_ids.index(decoder_start_token_id) :] + else: + return [] + + return token_ids + def _decode_asr(tokenizer, model_outputs, *, return_timestamps, return_language, time_precision): """ diff --git a/src/transformers/models/whisper/tokenization_whisper_fast.py b/src/transformers/models/whisper/tokenization_whisper_fast.py index fb1bf89ed606..be4ad842a7f6 100644 --- a/src/transformers/models/whisper/tokenization_whisper_fast.py +++ b/src/transformers/models/whisper/tokenization_whisper_fast.py @@ -312,6 +312,11 @@ def decode( return text def _decode(self, *args, normalize: bool = False, **kwargs) -> str: + if kwargs["skip_special_tokens"]: + prompt_token_id = self.convert_tokens_to_ids("<|startofprev|>") + decoder_start_token_id = self.convert_tokens_to_ids("<|startoftranscript|>") + kwargs["token_ids"] = self._strip_prompt(kwargs["token_ids"], prompt_token_id, decoder_start_token_id) + text = super()._decode(*args, **kwargs) if normalize: @@ -485,3 +490,30 @@ def _decode_asr(self, model_outputs, *, return_timestamps, return_language, time return_language=return_language, time_precision=time_precision, ) + + # Copied from transformers.models.whisper.tokenization_whisper.WhisperTokenizer.get_prompt_ids + def get_prompt_ids(self, text: str, return_tensors="np"): + """Converts prompt text to IDs that can be passed to 
[`~WhisperForConditionalGeneration.generate`].""" + batch_encoding = self("<|startofprev|>", text.strip(), add_prefix_space=True, add_special_tokens=False) + + # Check for special tokens + prompt_text_ids = batch_encoding["input_ids"][1:] + special_token_id = next((x for x in prompt_text_ids if x >= self.all_special_ids[0]), None) + if special_token_id is not None: + token = self.convert_ids_to_tokens(special_token_id) + raise ValueError(f"Encountered text in the prompt corresponding to disallowed special token: {token}.") + + batch_encoding.convert_to_tensors(tensor_type=return_tensors) + return batch_encoding["input_ids"] + + @staticmethod + # Copied from transformers.models.whisper.tokenization_whisper.WhisperTokenizer._strip_prompt + def _strip_prompt(token_ids: List[int], prompt_token_id: int, decoder_start_token_id: int): + has_prompt = isinstance(token_ids, list) and token_ids and token_ids[0] == prompt_token_id + if has_prompt: + if decoder_start_token_id in token_ids: + return token_ids[token_ids.index(decoder_start_token_id) :] + else: + return [] + + return token_ids
diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py index 883a2021b9bb..98bbbb3214a7 100644 --- a/tests/models/whisper/test_modeling_whisper.py +++ b/tests/models/whisper/test_modeling_whisper.py @@ -1013,6 +1013,48 @@ def test_mask_time_prob(self): encoder_last_hidden_state = model(**input_dict).encoder_last_hidden_state self.assertTrue(encoder_last_hidden_state.shape, (13, 30, 16)) + def test_generate_with_prompt_ids_and_task_and_language(self): + config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() + model = WhisperForConditionalGeneration(config).eval().to(torch_device) + input_features = input_dict["input_features"] + prompt_ids = np.arange(5) + language = "<|de|>" + task = "translate" + lang_id = 6 + task_id = 7 + model.generation_config.__setattr__("lang_to_id", {language: lang_id}) + model.generation_config.__setattr__("task_to_id", {task: task_id}) + + output = model.generate(input_features, max_new_tokens=5, task=task, language=language, prompt_ids=prompt_ids) + + expected_output_start = [ + *prompt_ids.tolist(), + model.generation_config.decoder_start_token_id, + lang_id, + task_id, + ] + for row in output.tolist(): + self.assertListEqual(row[: len(expected_output_start)], expected_output_start) + + def test_generate_with_prompt_ids_and_forced_decoder_ids(self): + config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() + model = WhisperForConditionalGeneration(config).eval().to(torch_device) + input_features = input_dict["input_features"] + prompt_ids = np.asarray(range(5)) + forced_decoder_ids = [(1, 6), (2, 7), (3, 8)] + + output = model.generate( + input_features, max_new_tokens=5, forced_decoder_ids=forced_decoder_ids, prompt_ids=prompt_ids + ) + + expected_output_start = [ + *prompt_ids.tolist(), + model.generation_config.decoder_start_token_id, + *[token for _rank, token in forced_decoder_ids], + ] + for row in output.tolist(): + self.assertListEqual(row[: len(expected_output_start)], expected_output_start) + @require_torch @require_torchaudio @@ -1429,6 +1471,60 @@ def test_tiny_specaugment_librispeech(self): # fmt: on self.assertTrue(torch.allclose(logits[0][0, 0, :30].cpu(), EXPECTED_LOGITS, atol=1e-4)) + @slow + def test_generate_with_prompt_ids(self): + processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") + model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") + model.to(torch_device) + input_speech = self._load_datasamples(4)[-1:] + input_features = processor(input_speech, return_tensors="pt").input_features + + output_without_prompt = model.generate(input_features) + prompt_ids = processor.get_prompt_ids("Leighton") + output_with_prompt = model.generate(input_features, prompt_ids=prompt_ids) + + expected_without_prompt = "<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>" + expected_with_prompt = "<|startofprev|> Leighton<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Leighton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>" + self.assertEqual(processor.decode(output_without_prompt[0]), expected_without_prompt) + self.assertEqual(processor.decode(output_with_prompt[0]), expected_with_prompt) + + @slow + def test_generate_with_prompt_ids_and_forced_decoder_ids(self): + 
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") + model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") + model.to(torch_device) + input_speech = self._load_datasamples(1) + input_features = processor(input_speech, return_tensors="pt").input_features + task = "translate" + language = "de" + expected_tokens = [f"<|{task}|>", f"<|{language}|>"] + prompt = "test prompt" + prompt_ids = processor.get_prompt_ids(prompt) + + output = model.generate(input_features, task=task, language=language, prompt_ids=prompt_ids) + text = processor.decode(output[0]) + + self.assertTrue(prompt in text) + self.assertTrue(all([token in text for token in expected_tokens])) + + @slow + def test_generate_with_prompt_ids_and_no_non_prompt_forced_decoder_ids(self): + processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en") + model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") + model.to(torch_device) + input_speech = self._load_datasamples(1) + input_features = processor(input_speech, return_tensors="pt").input_features + prompt = "test prompt" + prompt_ids = processor.get_prompt_ids(prompt) + + model.generation_config.forced_decoder_ids = None + model.config.forced_decoder_ids = None + + output = model.generate(input_features, prompt_ids=prompt_ids, return_timestamps=True) + text = processor.decode(output[0]) + + self.assertTrue(prompt in text) + def prepare_whisper_encoder_inputs_dict(config, input_features, head_mask=None): if head_mask is None: diff --git a/tests/models/whisper/test_processor_whisper.py b/tests/models/whisper/test_processor_whisper.py index b844d433ed33..e96f4260e94c 100644 --- a/tests/models/whisper/test_processor_whisper.py +++ b/tests/models/whisper/test_processor_whisper.py @@ -16,6 +16,8 @@ import tempfile import unittest +import pytest + from transformers import WhisperTokenizer, is_speech_available from transformers.testing_utils import require_sentencepiece, require_torch, require_torchaudio @@ -146,3 +148,32 @@ def test_get_decoder_prompt_ids(self): expected_ids = [TRANSCRIBE, NOTIMESTAMPS] self.assertListEqual([ids[-1] for ids in forced_decoder_ids], expected_ids) + + def test_get_prompt_ids(self): + processor = WhisperProcessor(tokenizer=self.get_tokenizer(), feature_extractor=self.get_feature_extractor()) + prompt_ids = processor.get_prompt_ids("Mr. Quilter") + decoded_prompt = processor.tokenizer.decode(prompt_ids) + + self.assertListEqual(prompt_ids.tolist(), [50360, 1770, 13, 2264, 346, 353]) + self.assertEqual(decoded_prompt, "<|startofprev|> Mr. Quilter") + + def test_empty_get_prompt_ids(self): + processor = WhisperProcessor(tokenizer=self.get_tokenizer(), feature_extractor=self.get_feature_extractor()) + prompt_ids = processor.get_prompt_ids("") + decoded_prompt = processor.tokenizer.decode(prompt_ids) + + self.assertListEqual(prompt_ids.tolist(), [50360, 220]) + self.assertEqual(decoded_prompt, "<|startofprev|> ") + + def test_get_prompt_ids_with_special_tokens(self): + processor = WhisperProcessor(tokenizer=self.get_tokenizer(), feature_extractor=self.get_feature_extractor()) + + def _test_prompt_error_raised_helper(prompt, special_token): + with pytest.raises(ValueError) as excinfo: + processor.get_prompt_ids(prompt) + expected = f"Encountered text in the prompt corresponding to disallowed special token: {special_token}." 
+ self.assertEqual(expected, str(excinfo.value)) + + _test_prompt_error_raised_helper("<|startofprev|> test", "<|startofprev|>") + _test_prompt_error_raised_helper("test <|notimestamps|>", "<|notimestamps|>") + _test_prompt_error_raised_helper("test <|zh|> test <|transcribe|>", "<|zh|>") diff --git a/tests/models/whisper/test_tokenization_whisper.py b/tests/models/whisper/test_tokenization_whisper.py index 9ceef149fab9..5022d29b730e 100644 --- a/tests/models/whisper/test_tokenization_whisper.py +++ b/tests/models/whisper/test_tokenization_whisper.py @@ -194,6 +194,25 @@ def test_find_longest_common_subsequence(self): merge = _find_longest_common_sequence([seq1, seq2, seq3]) self.assertEqual(merge, [1, 2, 3, 4, 5, 6, 7, 8]) + def test_skip_special_tokens_skips_prompt_ids(self): + tokenizer = self.get_tokenizer() + rust_tokenizer = self.get_rust_tokenizer() + # fmt: off + encoded_input = [ + 50361, 2221, 13, 2326, 388, 391, 50258, 50259, 50359, + 50363, 1282, 264, 2674, 9156, 295, 1523, 11, 2221, 13, + 2326, 388, 391, 13657, 365, 2681, 21296, 17711, 13, 50257, + ] + # fmt: on + expected_with_special_tokens = "<|startofprev|> Mr. Quilter<|startoftranscript|><|en|><|transcribe|><|notimestamps|> On the general principles of art, Mr. Quilter writes with equal lucidity.<|endoftext|>" + expected_without_special_tokens = " On the general principles of art, Mr. Quilter writes with equal lucidity." + self.assertEqual(tokenizer.decode(encoded_input, skip_special_tokens=False), expected_with_special_tokens) + self.assertEqual(tokenizer.decode(encoded_input, skip_special_tokens=True), expected_without_special_tokens) + self.assertEqual(rust_tokenizer.decode(encoded_input, skip_special_tokens=False), expected_with_special_tokens) + self.assertEqual( + rust_tokenizer.decode(encoded_input, skip_special_tokens=True), expected_without_special_tokens + ) + class SpeechToTextTokenizerMultilinguialTest(unittest.TestCase): checkpoint_name = "openai/whisper-small.en"
**Whisper Prompting**

### Feature request

Add prompting for the Whisper model to control the style/formatting of the generated text.

### Motivation

During training, Whisper can be fed a "previous context window" to condition on longer passages of text. The original OpenAI Whisper implementation provides the user with the option of passing an [`initial_prompt`](https://github.com/openai/whisper/blob/6dea21fd7f7253bfe450f1e2512a0fe47ee2d258/whisper/transcribe.py#L96) to the model. This prompt replaces the "previous context window" during inference. By passing the prompt as the "previous context window", the Whisper model conditions its generation on whatever text is passed as the prompt. This allows the user to control aspects of the generation, such as spellings of named entities and punctuation formatting (see https://github.com/openai/whisper/discussions/963#discussioncomment-4987057). This is possibly a cheaper way of adapting the Whisper model to specific decoding constraints than fine-tuning.

This notebook demonstrates prompting with the initial codebase, and explains how this can be achieved for HF's Whisper: https://colab.research.google.com/drive/14FSeaoRvgs5arOTfiMQBnQ5NaLyma7Tq?usp=sharing

The proposed API for prompting would look something as follows:

1. Encode prompt text to prompt token ids (`processor.get_prompt_ids`) - this method is a wrapper around `processor.tokenizer.__call__` that **doesn't** add the special token ids:

```python
prompt = "IR, Newswire"
prompt_ids = processor.get_prompt_ids(prompt)
```

2. Pass the input audio and prompt token ids to the `.generate` method to get the predicted ids:

```python
pred_ids = model.generate(input_features, prompt_ids=prompt_ids)
```

3. Decode the predicted ids and 'slice' off the prompt (we can do this by passing the `prompt_ids`):

```python
pred_str = processor.batch_decode(pred_ids, prompt_ids=prompt_ids)
```

=> We would need to wrap all of this `forced_decoder_ids` logic into the generate method and update the processor/tokenizer accordingly.

### Your contribution

Happy to guide the integration and review any PRs!
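Pulling the pieces together, a sketch of end-to-end usage matching what the merged patch implements (note the final API strips the prompt via `skip_special_tokens` rather than a `prompt_ids` argument to `batch_decode`); the silent audio is a stand-in for real speech, and the model weights are downloaded on first use:

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Stand-in audio: one second of silence at 16 kHz (use real speech in practice).
audio = np.zeros(16000, dtype=np.float32)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features

# Bias decoding toward the prompt's spelling (e.g. "Leighton" over "Layton").
prompt_ids = processor.get_prompt_ids("Leighton")
pred_ids = model.generate(input_features, prompt_ids=prompt_ids)

# With this PR, skip_special_tokens=True also strips the prompt tokens.
print(processor.batch_decode(pred_ids, skip_special_tokens=True)[0])
```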
cc @hollance

Hello, I'd like to pick up this issue!
- created_at: 1,680,278,096,000
- labels: []
- category: Feature Request
- edit_functions: ["src/transformers/models/whisper/modeling_whisper.py:WhisperForConditionalGeneration.generate", "src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer._decode", "src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer", "src/transformers/models/whisper/tokenization_whisper_fast.py:WhisperTokenizerFast._decode", "src/transformers/models/whisper/tokenization_whisper_fast.py:WhisperTokenizerFast"]
- added_functions: ["src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer.get_prompt_ids", "src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer._strip_prompt", "src/transformers/models/whisper/tokenization_whisper_fast.py:WhisperTokenizerFast.get_prompt_ids", "src/transformers/models/whisper/tokenization_whisper_fast.py:WhisperTokenizerFast._strip_prompt"]
---

- repo: scikit-learn/scikit-learn
- instance_id: scikit-learn__scikit-learn-24145
- base_commit: 55af30d981ea2f72346ff93602f0b3b740cfe8d6
diff --git a/doc/whats_new/v1.3.rst b/doc/whats_new/v1.3.rst index 9cab0db995c5d..ec1301844b877 100644 --- a/doc/whats_new/v1.3.rst +++ b/doc/whats_new/v1.3.rst @@ -487,6 +487,11 @@ Changelog categorical encoding based on target mean conditioned on the value of the category. :pr:`25334` by `Thomas Fan`_. +- |Enhancement| A new parameter `sparse_output` was added to + :class:`SplineTransformer`, available as of SciPy 1.8. If `sparse_output=True`, + :class:`SplineTransformer` returns a sparse CSR matrix. + :pr:`24145` by :user:`Christian Lorentzen <lorentzenchr>`. + - |Enhancement| Adds a `feature_name_combiner` parameter to :class:`preprocessing.OneHotEncoder`. This specifies a custom callable to create feature names to be returned by :meth:`get_feature_names_out`. diff --git a/sklearn/preprocessing/_polynomial.py b/sklearn/preprocessing/_polynomial.py index 64ecb9864fae0..f379ee9135706 100644 --- a/sklearn/preprocessing/_polynomial.py +++ b/sklearn/preprocessing/_polynomial.py @@ -13,11 +13,11 @@ from ..base import BaseEstimator, TransformerMixin from ..utils import check_array +from ..utils.fixes import sp_version, parse_version from ..utils.validation import check_is_fitted, FLOAT_DTYPES, _check_sample_weight from ..utils.validation import _check_feature_names_in from ..utils._param_validation import Interval, StrOptions from ..utils.stats import _weighted_percentile -from ..utils.fixes import sp_version, parse_version from ._csr_polynomial_expansion import ( _csr_polynomial_expansion, @@ -574,8 +574,6 @@ def transform(self, X): return XP -# TODO: -# - sparse support (either scipy or own cython solution)? class SplineTransformer(TransformerMixin, BaseEstimator): """Generate univariate B-spline bases for features. @@ -635,8 +633,14 @@ class SplineTransformer(TransformerMixin, BaseEstimator): i.e. a column of ones. It acts as an intercept term in a linear models. order : {'C', 'F'}, default='C' - Order of output array. 'F' order is faster to compute, but may slow - down subsequent estimators. + Order of output array in the dense case. `'F'` order is faster to compute, but + may slow down subsequent estimators. + + sparse_output : bool, default=False + Will return sparse CSR matrix if set True else will return an array. This + option is only available with `scipy>=1.8`. + + .. versionadded:: 1.2 Attributes ---------- @@ -699,6 +703,7 @@ class SplineTransformer(TransformerMixin, BaseEstimator): ], "include_bias": ["boolean"], "order": [StrOptions({"C", "F"})], + "sparse_output": ["boolean"], } def __init__( @@ -710,6 +715,7 @@ def __init__( extrapolation="constant", include_bias=True, order="C", + sparse_output=False, ): self.n_knots = n_knots self.degree = degree @@ -717,6 +723,7 @@ def __init__( self.extrapolation = extrapolation self.include_bias = include_bias self.order = order + self.sparse_output = sparse_output @staticmethod def _get_base_knot_positions(X, n_knots=10, knots="uniform", sample_weight=None): @@ -843,6 +850,12 @@ def fit(self, X, y=None, sample_weight=None): elif not np.all(np.diff(base_knots, axis=0) > 0): raise ValueError("knots must be sorted without duplicates.") + if self.sparse_output and sp_version < parse_version("1.8.0"): + raise ValueError( + "Option sparse_output=True is only available with scipy>=1.8.0, " + f"but here scipy=={sp_version} is used." 
+ ) + # number of knots for base interval n_knots = base_knots.shape[0] @@ -934,7 +947,7 @@ def transform(self, X): Returns ------- - XBS : ndarray of shape (n_samples, n_features * n_splines) + XBS : {ndarray, sparse matrix} of shape (n_samples, n_features * n_splines) The matrix of features, where n_splines is the number of bases elements of the B-splines, n_knots + degree - 1. """ @@ -946,6 +959,19 @@ def transform(self, X): n_splines = self.bsplines_[0].c.shape[1] degree = self.degree + # TODO: Remove this condition, once scipy 1.10 is the minimum version. + # Only scipy => 1.10 supports design_matrix(.., extrapolate=..). + # The default (implicit in scipy < 1.10) is extrapolate=False. + scipy_1_10 = sp_version >= parse_version("1.10.0") + # Note: self.bsplines_[0].extrapolate is True for extrapolation in + # ["periodic", "continue"] + if scipy_1_10: + use_sparse = self.sparse_output + kwargs_extrapolate = {"extrapolate": self.bsplines_[0].extrapolate} + else: + use_sparse = self.sparse_output and not self.bsplines_[0].extrapolate + kwargs_extrapolate = dict() + # Note that scipy BSpline returns float64 arrays and converts input # x=X[:, i] to c-contiguous float64. n_out = self.n_features_out_ + n_features * (1 - self.include_bias) @@ -953,7 +979,10 @@ def transform(self, X): dtype = X.dtype else: dtype = np.float64 - XBS = np.zeros((n_samples, n_out), dtype=dtype, order=self.order) + if use_sparse: + output_list = [] + else: + XBS = np.zeros((n_samples, n_out), dtype=dtype, order=self.order) for i in range(n_features): spl = self.bsplines_[i] @@ -972,20 +1001,53 @@ def transform(self, X): else: x = X[:, i] - XBS[:, (i * n_splines) : ((i + 1) * n_splines)] = spl(x) - - else: - xmin = spl.t[degree] - xmax = spl.t[-degree - 1] + if use_sparse: + XBS_sparse = BSpline.design_matrix( + x, spl.t, spl.k, **kwargs_extrapolate + ) + if self.extrapolation == "periodic": + # See the construction of coef in fit. We need to add the last + # degree spline basis function to the first degree ones and + # then drop the last ones. + # Note: See comment about SparseEfficiencyWarning below. + XBS_sparse = XBS_sparse.tolil() + XBS_sparse[:, :degree] += XBS_sparse[:, -degree:] + XBS_sparse = XBS_sparse[:, :-degree] + else: + XBS[:, (i * n_splines) : ((i + 1) * n_splines)] = spl(x) + else: # extrapolation in ("constant", "linear") + xmin, xmax = spl.t[degree], spl.t[-degree - 1] + # spline values at boundaries + f_min, f_max = spl(xmin), spl(xmax) mask = (xmin <= X[:, i]) & (X[:, i] <= xmax) - XBS[mask, (i * n_splines) : ((i + 1) * n_splines)] = spl(X[mask, i]) + if use_sparse: + mask_inv = ~mask + x = X[:, i].copy() + # Set some arbitrary values outside boundary that will be reassigned + # later. + x[mask_inv] = spl.t[self.degree] + XBS_sparse = BSpline.design_matrix(x, spl.t, spl.k) + # Note: Without converting to lil_matrix we would get: + # scipy.sparse._base.SparseEfficiencyWarning: Changing the sparsity + # structure of a csr_matrix is expensive. lil_matrix is more + # efficient. + if np.any(mask_inv): + XBS_sparse = XBS_sparse.tolil() + XBS_sparse[mask_inv, :] = 0 + else: + XBS[mask, (i * n_splines) : ((i + 1) * n_splines)] = spl(X[mask, i]) # Note for extrapolation: # 'continue' is already returned as is by scipy BSplines if self.extrapolation == "error": # BSpline with extrapolate=False does not raise an error, but - # output np.nan. - if np.any(np.isnan(XBS[:, (i * n_splines) : ((i + 1) * n_splines)])): + # outputs np.nan. 
+ if (use_sparse and np.any(np.isnan(XBS_sparse.data))) or ( + not use_sparse + and np.any( + np.isnan(XBS[:, (i * n_splines) : ((i + 1) * n_splines)]) + ) + ): raise ValueError( "X contains values beyond the limits of the knots." ) @@ -995,21 +1057,29 @@ def transform(self, X): # Only the first degree and last degree number of splines # have non-zero values at the boundaries. - # spline values at boundaries - f_min = spl(xmin) - f_max = spl(xmax) mask = X[:, i] < xmin if np.any(mask): - XBS[mask, (i * n_splines) : (i * n_splines + degree)] = f_min[ - :degree - ] + if use_sparse: + # Note: See comment about SparseEfficiencyWarning above. + XBS_sparse = XBS_sparse.tolil() + XBS_sparse[mask, :degree] = f_min[:degree] + + else: + XBS[mask, (i * n_splines) : (i * n_splines + degree)] = f_min[ + :degree + ] mask = X[:, i] > xmax if np.any(mask): - XBS[ - mask, - ((i + 1) * n_splines - degree) : ((i + 1) * n_splines), - ] = f_max[-degree:] + if use_sparse: + # Note: See comment about SparseEfficiencyWarning above. + XBS_sparse = XBS_sparse.tolil() + XBS_sparse[mask, -degree:] = f_max[-degree:] + else: + XBS[ + mask, + ((i + 1) * n_splines - degree) : ((i + 1) * n_splines), + ] = f_max[-degree:] elif self.extrapolation == "linear": # Continue the degree first and degree last spline bases @@ -1018,8 +1088,6 @@ def transform(self, X): # Note that all others have derivative = value = 0 at the # boundaries. - # spline values at boundaries - f_min, f_max = spl(xmin), spl(xmax) # spline derivatives = slopes at boundaries fp_min, fp_max = spl(xmin, nu=1), spl(xmax, nu=1) # Compute the linear continuation. @@ -1030,16 +1098,57 @@ def transform(self, X): for j in range(degree): mask = X[:, i] < xmin if np.any(mask): - XBS[mask, i * n_splines + j] = ( - f_min[j] + (X[mask, i] - xmin) * fp_min[j] - ) + linear_extr = f_min[j] + (X[mask, i] - xmin) * fp_min[j] + if use_sparse: + # Note: See comment about SparseEfficiencyWarning above. + XBS_sparse = XBS_sparse.tolil() + XBS_sparse[mask, j] = linear_extr + else: + XBS[mask, i * n_splines + j] = linear_extr mask = X[:, i] > xmax if np.any(mask): k = n_splines - 1 - j - XBS[mask, i * n_splines + k] = ( - f_max[k] + (X[mask, i] - xmax) * fp_max[k] - ) + linear_extr = f_max[k] + (X[mask, i] - xmax) * fp_max[k] + if use_sparse: + # Note: See comment about SparseEfficiencyWarning above. + XBS_sparse = XBS_sparse.tolil() + XBS_sparse[mask, k : k + 1] = linear_extr[:, None] + else: + XBS[mask, i * n_splines + k] = linear_extr + + if use_sparse: + if not sparse.isspmatrix_csr(XBS_sparse): + XBS_sparse = XBS_sparse.tocsr() + output_list.append(XBS_sparse) + + if use_sparse: + # TODO: Remove this conditional error when the minimum supported version of + # SciPy is 1.9.2 + # `scipy.sparse.hstack` breaks in scipy<1.9.2 + # when `n_features_out_ > max_int32` + max_int32 = np.iinfo(np.int32).max + all_int32 = True + for mat in output_list: + all_int32 &= mat.indices.dtype == np.int32 + if ( + sp_version < parse_version("1.9.2") + and self.n_features_out_ > max_int32 + and all_int32 + ): + raise ValueError( + "In scipy versions `<1.9.2`, the function `scipy.sparse.hstack`" + " produces negative columns when:\n1. The output shape contains" + " `n_cols` too large to be represented by a 32bit signed" + " integer.\n. 
All sub-matrices to be stacked have indices of" + " dtype `np.int32`.\nTo avoid this error, either use a version" + " of scipy `>=1.9.2` or alter the `SplineTransformer`" + " transformer to produce fewer than 2^31 output features" + ) + XBS = sparse.hstack(output_list) + elif self.sparse_output: + # TODO: Remove ones scipy 1.10 is the minimum version. See comments above. + XBS = sparse.csr_matrix(XBS) if self.include_bias: return XBS
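The feature this diff adds is a `sparse_output` flag on `SplineTransformer`. A minimal usage sketch, assuming scipy >= 1.8 and a scikit-learn release that includes the change:

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import SplineTransformer

X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)

# With sparse_output=True, transform() returns a CSR matrix; on scipy < 1.8,
# fit() raises a ValueError instead.
splt_sparse = SplineTransformer(n_knots=4, degree=3, sparse_output=True).fit(X)
XBS = splt_sparse.transform(X)
print(sparse.isspmatrix_csr(XBS))  # True

# The sparse and dense outputs agree.
dense = SplineTransformer(n_knots=4, degree=3).fit_transform(X)
print(np.allclose(XBS.toarray(), dense))  # True
```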
diff --git a/sklearn/preprocessing/tests/test_polynomial.py b/sklearn/preprocessing/tests/test_polynomial.py index 727b31b793b1d..1062a3da820e7 100644 --- a/sklearn/preprocessing/tests/test_polynomial.py +++ b/sklearn/preprocessing/tests/test_polynomial.py @@ -35,6 +35,22 @@ def is_c_contiguous(a): assert np.isfortran(est(order="F").fit_transform(X)) +@pytest.mark.parametrize( + "params, err_msg", + [ + ({"knots": [[1]]}, r"Number of knots, knots.shape\[0\], must be >= 2."), + ({"knots": [[1, 1], [2, 2]]}, r"knots.shape\[1\] == n_features is violated"), + ({"knots": [[1], [0]]}, "knots must be sorted without duplicates."), + ], +) +def test_spline_transformer_input_validation(params, err_msg): + """Test that we raise errors for invalid input in SplineTransformer.""" + X = [[1], [2]] + + with pytest.raises(ValueError, match=err_msg): + SplineTransformer(**params).fit(X) + + @pytest.mark.parametrize("extrapolation", ["continue", "periodic"]) def test_spline_transformer_integer_knots(extrapolation): """Test that SplineTransformer accepts integer value knot positions.""" @@ -109,8 +125,7 @@ def test_split_transform_feature_names_extrapolation_degree(extrapolation, degre def test_spline_transformer_unity_decomposition(degree, n_knots, knots, extrapolation): """Test that B-splines are indeed a decomposition of unity. - Splines basis functions must sum up to 1 per row, if we stay in between - boundaries. + Splines basis functions must sum up to 1 per row, if we stay in between boundaries. """ X = np.linspace(0, 1, 100)[:, None] # make the boundaries 0 and 1 part of X_train, for sure. @@ -178,8 +193,7 @@ def test_spline_transformer_linear_regression(bias, intercept): def test_spline_transformer_get_base_knot_positions( knots, n_knots, sample_weight, expected_knots ): - # Check the behaviour to find the positions of the knots with and without - # `sample_weight` + """Check the behaviour to find knot positions with and without sample_weight.""" X = np.array([[0, 2], [0, 2], [2, 2], [3, 3], [4, 6], [5, 8], [6, 14]]) base_knots = SplineTransformer._get_base_knot_positions( X=X, knots=knots, n_knots=n_knots, sample_weight=sample_weight @@ -238,9 +252,7 @@ def test_spline_transformer_periodic_spline_backport(): def test_spline_transformer_periodic_splines_periodicity(): - """ - Test if shifted knots result in the same transformation up to permutation. 
- """ + """Test if shifted knots result in the same transformation up to permutation.""" X = np.linspace(0, 10, 101)[:, None] transformer_1 = SplineTransformer( @@ -349,9 +361,10 @@ def test_spline_transformer_extrapolation(bias, intercept, degree): n_knots=4, degree=degree, include_bias=bias, extrapolation="error" ) splt.fit(X) - with pytest.raises(ValueError): + msg = "X contains values beyond the limits of the knots" + with pytest.raises(ValueError, match=msg): splt.transform([[-10]]) - with pytest.raises(ValueError): + with pytest.raises(ValueError, match=msg): splt.transform([[5]]) @@ -375,12 +388,94 @@ def test_spline_transformer_kbindiscretizer(): assert_allclose(splines, kbins, rtol=1e-13) +@pytest.mark.skipif( + sp_version < parse_version("1.8.0"), + reason="The option `sparse_output` is available as of scipy 1.8.0", +) +@pytest.mark.parametrize("degree", range(1, 3)) +@pytest.mark.parametrize("knots", ["uniform", "quantile"]) +@pytest.mark.parametrize( + "extrapolation", ["error", "constant", "linear", "continue", "periodic"] +) +@pytest.mark.parametrize("include_bias", [False, True]) +def test_spline_transformer_sparse_output( + degree, knots, extrapolation, include_bias, global_random_seed +): + rng = np.random.RandomState(global_random_seed) + X = rng.randn(200).reshape(40, 5) + + splt_dense = SplineTransformer( + degree=degree, + knots=knots, + extrapolation=extrapolation, + include_bias=include_bias, + sparse_output=False, + ) + splt_sparse = SplineTransformer( + degree=degree, + knots=knots, + extrapolation=extrapolation, + include_bias=include_bias, + sparse_output=True, + ) + + splt_dense.fit(X) + splt_sparse.fit(X) + + assert sparse.isspmatrix_csr(splt_sparse.transform(X)) + assert_allclose(splt_dense.transform(X), splt_sparse.transform(X).toarray()) + + # extrapolation regime + X_min = np.amin(X, axis=0) + X_max = np.amax(X, axis=0) + X_extra = np.r_[ + np.linspace(X_min - 5, X_min, 10), np.linspace(X_max, X_max + 5, 10) + ] + if extrapolation == "error": + msg = "X contains values beyond the limits of the knots" + with pytest.raises(ValueError, match=msg): + splt_dense.transform(X_extra) + msg = "Out of bounds" + with pytest.raises(ValueError, match=msg): + splt_sparse.transform(X_extra) + else: + assert_allclose( + splt_dense.transform(X_extra), splt_sparse.transform(X_extra).toarray() + ) + + +@pytest.mark.skipif( + sp_version >= parse_version("1.8.0"), + reason="The option `sparse_output` is available as of scipy 1.8.0", +) +def test_spline_transformer_sparse_output_raise_error_for_old_scipy(): + """Test that SplineTransformer with sparse=True raises for scipy<1.8.0.""" + X = [[1], [2]] + with pytest.raises(ValueError, match="scipy>=1.8.0"): + SplineTransformer(sparse_output=True).fit(X) + + @pytest.mark.parametrize("n_knots", [5, 10]) @pytest.mark.parametrize("include_bias", [True, False]) -@pytest.mark.parametrize("degree", [3, 5]) -def test_spline_transformer_n_features_out(n_knots, include_bias, degree): +@pytest.mark.parametrize("degree", [3, 4]) +@pytest.mark.parametrize( + "extrapolation", ["error", "constant", "linear", "continue", "periodic"] +) +@pytest.mark.parametrize("sparse_output", [False, True]) +def test_spline_transformer_n_features_out( + n_knots, include_bias, degree, extrapolation, sparse_output +): """Test that transform results in n_features_out_ features.""" - splt = SplineTransformer(n_knots=n_knots, degree=degree, include_bias=include_bias) + if sparse_output and sp_version < parse_version("1.8.0"): + pytest.skip("The option `sparse_output` 
is available as of scipy 1.8.0") + + splt = SplineTransformer( + n_knots=n_knots, + degree=degree, + include_bias=include_bias, + extrapolation=extrapolation, + sparse_output=sparse_output, + ) X = np.linspace(0, 1, 10)[:, None] splt.fit(X)
Add sparse matrix output to SplineTransformer ### Describe the workflow you want to enable As B-splines naturally have a sparse structure, I'd like to have the option that `SplineTransformer` returns a sparse matrix instead of always an ndarray. ```python import numpy as np from sklearn.preprocessing import SplineTransformer X = np.arange(6).reshape(6, 1) spline = SplineTransformer(degree=2, n_knots=3, sparse=True) spline.fit_transform(X) ``` ### Describe your proposed solution With scipy >= 1.8 (yet to be released), we can use `design_matrix` from https://github.com/scipy/scipy/pull/14344. ### Describe alternatives you've considered, if relevant _No response_ ### Additional context _No response_
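For illustration, a minimal sketch of the scipy primitive this request points at, `BSpline.design_matrix` (added in scipy 1.8.0), which returns a sparse CSR matrix directly; the degree, knot vector, and sample points below are made up for the example, not taken from `SplineTransformer` internals:

```python
# Hedged sketch, assuming scipy >= 1.8.0: build a sparse B-spline design
# matrix with scipy. Knots, degree, and sample points are illustrative.
import numpy as np
from scipy.interpolate import BSpline

k = 2                                                        # spline degree
t = np.r_[[0.0] * k, np.linspace(0.0, 5.0, 4), [5.0] * k]    # padded knot vector
x = np.linspace(0.5, 4.5, 5)                                 # points inside the base interval

XBS_sparse = BSpline.design_matrix(x, t, k)                  # scipy.sparse CSR matrix
print(XBS_sparse.shape, XBS_sparse.nnz)                      # each row has at most k + 1 nonzeros
```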
1,659,969,522,000
[ "module:preprocessing" ]
Feature Request
[ "sklearn/preprocessing/_polynomial.py:SplineTransformer", "sklearn/preprocessing/_polynomial.py:SplineTransformer.__init__", "sklearn/preprocessing/_polynomial.py:SplineTransformer.fit", "sklearn/preprocessing/_polynomial.py:SplineTransformer.transform" ]
[]
avantifellows/quiz-backend
avantifellows__quiz-backend-84
f970b54634a9a9ba000aaf76d05338a5d77b0d60
diff --git a/app/models.py b/app/models.py index cfb9644b..80f94b94 100644 --- a/app/models.py +++ b/app/models.py @@ -365,6 +365,11 @@ class Config: schema_extra = {"example": {"answer": [0, 1, 2], "visited": True}} +""" +Note : The below model is not being used currently anywhere +""" + + class SessionAnswerResponse(SessionAnswer): """Model for the response of any request that returns a session answer""" diff --git a/app/routers/session_answers.py b/app/routers/session_answers.py index 9d11e4a6..cd3f4937 100644 --- a/app/routers/session_answers.py +++ b/app/routers/session_answers.py @@ -2,68 +2,82 @@ from fastapi.responses import JSONResponse from fastapi.encoders import jsonable_encoder from database import client -from models import SessionAnswerResponse, UpdateSessionAnswer +from models import UpdateSessionAnswer from utils import remove_optional_unset_args router = APIRouter(prefix="/session_answers", tags=["Session Answers"]) -@router.patch("/{session_answer_id}", response_model=SessionAnswerResponse) -async def update_session_answer( - session_answer_id: str, session_answer: UpdateSessionAnswer +@router.patch("/{session_id}/{position_index}", response_model=None) +async def update_session_answer_in_a_session( + session_id: str, position_index: int, session_answer: UpdateSessionAnswer ): + """ + Update a session answer in a session by its position index in the session answers array + Path Params: + session_id - the id of the session + position_index - the position index of the session answer in the session answers array. This corresponds to the position of the question in the quiz + """ session_answer = remove_optional_unset_args(session_answer) session_answer = jsonable_encoder(session_answer) - if (client.quiz.session_answers.find_one({"_id": session_answer_id})) is None: + # check if the session exists + session = client.quiz.sessions.find_one({"_id": session_id}) + if session is None: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, - detail=f"session_answer {session_answer_id} not found", + detail=f"Provided session with id {session_id} not found", ) - # update the document in the session_answers collection - client.quiz.session_answers.update_one( - {"_id": session_answer_id}, {"$set": session_answer} - ) - - updated_session_answer = client.quiz.session_answers.find_one( - {"_id": session_answer_id} - ) + # check if the session has session answers key + if "session_answers" not in session or session["session_answers"] is None: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail=f"No session answers found in the session with id {session_id}", + ) - # update the document in the sessions collection if this answer - # is present in the subset of session answers we store in the document - # corresponding to the session - session_to_update = client.quiz.sessions.find_one( - {"_id": updated_session_answer["session_id"]} - ) + # check if the session answer index that we're trying to access is out of bounds or not + if position_index > len(session["session_answers"]): + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Provided position index {position_index} is out of bounds of length of the session answers array", + ) - session_answers = list(session_to_update["session_answers"]) - update_session = False - for index, _ in enumerate(session_answers): - if session_answers[index]["_id"] == session_answer_id: - session_answers[index].update(session_answer) - update_session = True - break + # constructing the $set query for mongodb 
+ setQuery = {} + for key, value in session_answer.items(): + setQuery[f"session_answers.{position_index}.{key}"] = value - if update_session: - client.quiz.sessions.update_one( - {"_id": session_to_update["_id"]}, - {"$set": {"session_answers": session_answers}}, - ) + # update the document in the session_answers collection + client.quiz.sessions.update_one({"_id": session_id}, {"$set": setQuery}) - return JSONResponse(status_code=status.HTTP_200_OK, content=updated_session_answer) + return JSONResponse(status_code=status.HTTP_200_OK) -@router.get("/{session_answer_id}", response_model=SessionAnswerResponse) -async def get_session_answer(session_answer_id: str): - if ( - session_answer := client.quiz.session_answers.find_one( - {"_id": session_answer_id} +@router.get("/{session_id}/{position_index}", response_model=None) +async def get_session_answer_from_a_session(session_id: str, position_index: int): + pipeline = [ + { + "$match": { # match the session with the provided session_id + "_id": session_id + } + }, + { + "$project": { # project the required element from session_answers array + "_id": 0, + "session_answer": { + "$arrayElemAt": ["$session_answers", position_index] + }, + } + }, + ] + aggregation_result = list(client.quiz.sessions.aggregate(pipeline)) + if len(aggregation_result) == 0: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="Either session_id is wrong or position_index is out of bounds", ) - ) is not None: - return session_answer - raise HTTPException( - status_code=status.HTTP_404_NOT_FOUND, - detail=f"session_answer {session_answer_id} not found", + return JSONResponse( + status_code=status.HTTP_200_OK, content=aggregation_result[0]["session_answer"] ) diff --git a/app/routers/sessions.py b/app/routers/sessions.py index 34f07615..6b14efca 100644 --- a/app/routers/sessions.py +++ b/app/routers/sessions.py @@ -100,17 +100,13 @@ async def create_session(session: Session): current_session["has_quiz_ended"] = last_session.get("has_quiz_ended", False) # restore the answers from the last (previous) sessions - last_session_answers = list( - client.quiz.session_answers.find( - {"session_id": last_session["_id"]}, - sort=[("_id", pymongo.ASCENDING)], - ) - ) + session_answers_of_the_last_session = last_session["session_answers"] - for index, session_answer in enumerate(last_session_answers): + for _, session_answer in enumerate(session_answers_of_the_last_session): # note: we retain created_at key in session_answer for key in ["_id", "session_id"]: - session_answer.pop(key) + if key in session_answer: + session_answer.pop(key) # append with new session_answer "_id" keys session_answers.append( @@ -120,17 +116,10 @@ async def create_session(session: Session): current_session["session_answers"] = session_answers # insert current session into db - new_session = client.quiz.sessions.insert_one(current_session) - created_session = client.quiz.sessions.find_one({"_id": new_session.inserted_id}) - - # update with new session_id and insert to db - for index, _ in enumerate(session_answers): - session_answers[index]["session_id"] = new_session.inserted_id - - client.quiz.session_answers.insert_many(session_answers) + client.quiz.sessions.insert_one(current_session) # return the created session - return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_session) + return JSONResponse(status_code=status.HTTP_201_CREATED, content=current_session) @router.patch("/{session_id}", response_model=UpdateSessionResponse) @@ -143,6 +132,7 @@ async def 
update_session(session_id: str, session_updates: UpdateSession): * dummy event logic added for JNV -- will be removed! """ new_event = jsonable_encoder(session_updates)["event"] + session_update_query = {} # if new_event == EventType.dummy_event: # return JSONResponse( @@ -159,8 +149,16 @@ async def update_session(session_id: str, session_updates: UpdateSession): event_obj = jsonable_encoder(Event.parse_obj({"event_type": new_event})) if session["events"] is None: session["events"] = [event_obj] + if "$set" not in session_update_query: + session_update_query["$set"] = {"events": [event_obj]} + else: + session_update_query["$set"].update({"events": [event_obj]}) else: session["events"].append(event_obj) + if "$push" not in session_update_query: + session_update_query["$push"] = {"events": event_obj} + else: + session_update_query["$push"].update({"events": event_obj}) # diff between times of last two events time_elapsed = 0 @@ -212,15 +210,21 @@ async def update_session(session_id: str, session_updates: UpdateSession): ): # if `time_remaining` key is not present => # no time limit is set, no need to respond with time_remaining - session["time_remaining"] = max(0, session["time_remaining"] - time_elapsed) - response_content = {"time_remaining": session["time_remaining"]} + time_remaining = max(0, session["time_remaining"] - time_elapsed) + if "$set" not in session_update_query: + session_update_query["$set"] = {"time_remaining": time_remaining} + else: + session_update_query["$set"].update({"time_remaining": time_remaining}) + response_content = {"time_remaining": time_remaining} # update the document in the sessions collection if new_event == EventType.end_quiz: - session["has_quiz_ended"] = True - client.quiz.sessions.update_one( - {"_id": session_id}, {"$set": jsonable_encoder(session)} - ) + if "$set" not in session_update_query: + session_update_query["$set"] = {"has_quiz_ended": True} + else: + session_update_query["$set"].update({"has_quiz_ended": True}) + + client.quiz.sessions.update_one({"_id": session_id}, session_update_query) return JSONResponse(status_code=status.HTTP_200_OK, content=response_content)
diff --git a/app/tests/test_session_answers.py b/app/tests/test_session_answers.py index a2c04a1e..2d05a9be 100644 --- a/app/tests/test_session_answers.py +++ b/app/tests/test_session_answers.py @@ -7,12 +7,13 @@ class SessionAnswerTestCase(SessionsBaseTestCase): def setUp(self): super().setUp() self.session_answers = self.homework_session["session_answers"] + self.session_id = self.homework_session["_id"] + self.session_answer_position_index = 0 self.session_answer = self.session_answers[0] - self.session_answer_id = self.session_answer["_id"] - def test_gets_session_answer_with_valid_id(self): + def test_gets_session_answer_from_a_session(self): response = self.client.get( - f"{session_answers.router.prefix}/{self.session_answer_id}" + f"{session_answers.router.prefix}/{self.session_id}/{self.session_answer_position_index}" ) assert response.status_code == 200 session_answer = json.loads(response.content) @@ -22,12 +23,12 @@ def test_gets_session_answer_with_valid_id(self): def test_update_session_answer_with_only_answer(self): new_answer = [0, 1, 2] response = self.client.patch( - f"{session_answers.router.prefix}/{self.session_answer_id}", + f"{session_answers.router.prefix}/{self.session_id}/{self.session_answer_position_index}", json={"answer": new_answer}, ) assert response.status_code == 200 response = self.client.get( - f"{session_answers.router.prefix}/{self.session_answer_id}" + f"{session_answers.router.prefix}/{self.session_id}/{self.session_answer_position_index}" ) session_answer = json.loads(response.content) @@ -40,12 +41,12 @@ def test_update_session_answer_with_only_answer(self): def test_update_session_answer_with_only_visited(self): new_visited = True response = self.client.patch( - f"{session_answers.router.prefix}/{self.session_answer_id}", + f"{session_answers.router.prefix}/{self.session_id}/{self.session_answer_position_index}", json={"visited": new_visited}, ) assert response.status_code == 200 response = self.client.get( - f"{session_answers.router.prefix}/{self.session_answer_id}" + f"{session_answers.router.prefix}/{self.session_id}/{self.session_answer_position_index}" ) session_answer = json.loads(response.content) diff --git a/app/tests/test_sessions.py b/app/tests/test_sessions.py index 66105d14..531db98b 100644 --- a/app/tests/test_sessions.py +++ b/app/tests/test_sessions.py @@ -96,12 +96,14 @@ def test_create_session_with_previous_session_and_start_event(self): assert response["is_first"] is False def test_create_session_with_valid_quiz_id_and_previous_session(self): + self.session_id = self.homework_session["_id"] self.session_answers = self.homework_session["session_answers"] + self.session_answer_position_index = 0 self.session_answer = self.session_answers[0] self.session_answer_id = self.session_answer["_id"] new_answer = [0, 1, 2] response = self.client.patch( - f"{session_answers.router.prefix}/{self.session_answer_id}", + f"{session_answers.router.prefix}/{self.session_id}/{self.session_answer_position_index}", json={"answer": new_answer}, ) response = self.client.post( @@ -214,8 +216,5 @@ def test_time_remaining_in_new_session_with_quiz_resume(self): f"{sessions.router.prefix}/{resumed_session_id}" ).json() - # because time has passed between both quizzes - assert ( - json.loads(response.content)["time_remaining"] < quiz["time_limit"]["max"] - ) + # because time has passed between both sessions assert updated_resumed_session["time_remaining"] < quiz["time_limit"]["max"]
At some places we're updating just one key of an object or one element of an array, but we send the whole object to MongoDB to update, which is inefficient.
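To make the inefficiency and its fix concrete, a hedged sketch of the dotted-path `$set` update that the patch above adopts; the id and payload are hypothetical:

```python
# Hedged sketch: update only the touched sub-fields of one array element via
# MongoDB dotted paths, instead of rewriting the whole document.
# session_id and the payload below are made-up examples.
from pymongo import MongoClient

client = MongoClient()
sessions = client.quiz.sessions

session_id = "session-123"              # hypothetical session id
position_index = 0                      # index into the session_answers array
session_answer = {"answer": [0, 1, 2], "visited": True}

# Keys like "session_answers.0.answer" target a single element's fields.
set_query = {
    f"session_answers.{position_index}.{key}": value
    for key, value in session_answer.items()
}
sessions.update_one({"_id": session_id}, {"$set": set_query})
```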
1,680,002,295,000
[]
Performance Issue
[ "app/routers/session_answers.py:update_session_answer", "app/routers/session_answers.py:get_session_answer", "app/routers/sessions.py:create_session", "app/routers/sessions.py:update_session" ]
[ "app/routers/session_answers.py:update_session_answer_in_a_session", "app/routers/session_answers.py:get_session_answer_from_a_session" ]
internetarchive/openlibrary
internetarchive__openlibrary-7929
dc49fddb78a3cb25138922790ddd6a5dd2b5741c
diff --git a/openlibrary/core/lending.py b/openlibrary/core/lending.py index 6162ed5b081..d7e2a1949cb 100644 --- a/openlibrary/core/lending.py +++ b/openlibrary/core/lending.py @@ -511,13 +511,53 @@ def _get_ia_loan(identifier, userid): def get_loans_of_user(user_key): """TODO: Remove inclusion of local data; should only come from IA""" + if 'env' not in web.ctx: + """For the get_cached_user_loans to call the API if no cache is present, + we have to fakeload the web.ctx + """ + delegate.fakeload() + account = OpenLibraryAccount.get(username=user_key.split('/')[-1]) loandata = web.ctx.site.store.values(type='/type/loan', name='user', value=user_key) loans = [Loan(d) for d in loandata] + (_get_ia_loans_of_user(account.itemname)) + # Set patron's loans in cache w/ now timestamp + get_cached_loans_of_user.memcache_set( + [user_key], {}, loans or [], time.time() + ) # rehydrate cache return loans +get_cached_loans_of_user = cache.memcache_memoize( + get_loans_of_user, + key_prefix='lending.cached_loans', + timeout=5 * dateutil.MINUTE_SECS, # time to live for cached loans = 5 minutes +) + + +def get_user_waiting_loans(user_key): + """Gets the waitingloans of the patron. + + Returns [] if user has no waitingloans. + """ + from .waitinglist import WaitingLoan + + account = OpenLibraryAccount.get(key=user_key) + itemname = account.itemname + result = WaitingLoan.query(userid=itemname) + get_cached_user_waiting_loans.memcache_set( + [user_key], {}, result or {}, time.time() + ) # rehydrate cache + return result or [] + + +get_cached_user_waiting_loans = cache.memcache_memoize( + get_user_waiting_loans, + key_prefix='waitinglist.user_waiting_loans', + timeout=10 * dateutil.MINUTE_SECS, +) + + def _get_ia_loans_of_user(userid): ia_loans = ia_lending_api.find_loans(userid=userid) return [Loan.from_ia_loan(d) for d in ia_loans] diff --git a/openlibrary/core/models.py b/openlibrary/core/models.py index d582db128c4..b4b3c1cd2a5 100644 --- a/openlibrary/core/models.py +++ b/openlibrary/core/models.py @@ -16,6 +16,7 @@ # TODO: fix this. openlibrary.core should not import plugins. from openlibrary import accounts +from openlibrary.core import lending from openlibrary.catalog import add_book from openlibrary.core.booknotes import Booknotes from openlibrary.core.bookshelves import Bookshelves @@ -24,7 +25,7 @@ from openlibrary.core.observations import Observations from openlibrary.core.ratings import Ratings from openlibrary.core.vendors import create_edition_from_amazon_metadata -from openlibrary.utils import extract_numeric_id_from_olid +from openlibrary.utils import extract_numeric_id_from_olid, dateutil from openlibrary.utils.isbn import to_isbn_13, isbn_13_to_isbn_10, canonical # Seed might look unused, but removing it causes an error :/ @@ -916,25 +917,51 @@ def has_borrowed(self, book): loan = self.get_loan_for(book.ocaid) return loan is not None - def get_loan_for(self, ocaid): + def get_loan_for(self, ocaid, use_cache=False): """Returns the loan object for given ocaid. Returns None if this user hasn't borrowed the given book. """ from ..plugins.upstream import borrow - loans = borrow.get_loans(self) + loans = ( + lending.get_cached_loans_of_user(self.key) + if use_cache + else lending.get_loans_of_user(self.key) + ) for loan in loans: if ocaid == loan['ocaid']: return loan def get_waiting_loan_for(self, ocaid): """ - :param str or None ocaid: + :param str or None ocaid: edition ocaid :rtype: dict (e.g. 
{position: number}) """ return ocaid and WaitingLoan.find(self.key, ocaid) + def get_user_waiting_loans(self, ocaid=None, use_cache=False): + """ + Similar to get_waiting_loan_for, but fetches and caches all of user's waiting loans + :param str or None ocaid: edition ocaid + :rtype: dict (e.g. {position: number}) + """ + all_user_waiting_loans = ( + lending.get_cached_user_waiting_loans + if use_cache + else lending.get_user_waiting_loans + )(self.key) + if ocaid: + return next( + ( + loan + for loan in all_user_waiting_loans + if loan['identifier'] == ocaid + ), + None, + ) + return all_user_waiting_loans + def __repr__(self): return "<User: %s>" % repr(self.key) diff --git a/openlibrary/core/waitinglist.py b/openlibrary/core/waitinglist.py index 5381e4614b6..207bae4da44 100644 --- a/openlibrary/core/waitinglist.py +++ b/openlibrary/core/waitinglist.py @@ -16,6 +16,7 @@ import logging import web from openlibrary.accounts.model import OpenLibraryAccount +from openlibrary.core import cache from . import helpers as h from .sendmail import sendmail_with_template from . import db diff --git a/openlibrary/macros/LoanStatus.html b/openlibrary/macros/LoanStatus.html index b9649006622..7c6427a7378 100644 --- a/openlibrary/macros/LoanStatus.html +++ b/openlibrary/macros/LoanStatus.html @@ -17,7 +17,7 @@ $ work_key = work_key or (doc.get('works') and doc.works[0].key) $ waiting_loan_start_time = time() -$ waiting_loan = check_loan_status and ocaid and ctx.user and ctx.user.get_waiting_loan_for(ocaid) +$ waiting_loan = check_loan_status and ocaid and ctx.user and ctx.user.get_user_waiting_loans(ocaid, use_cache=True) $ waiting_loan_total_time = time() - waiting_loan_start_time $ my_turn_to_borrow = waiting_loan and waiting_loan['status'] == 'available' and waiting_loan['position'] == 1 @@ -34,7 +34,7 @@ $# Checks to see if patron has actively loan / waitlist for this book $ get_loan_for_start_time = time() -$ user_loan = doc.get('loan') or (check_loan_status and ocaid and ctx.user and ctx.user.get_loan_for(ocaid)) +$ user_loan = doc.get('loan') or (check_loan_status and ocaid and ctx.user and ctx.user.get_loan_for(ocaid, use_cache=True)) $ get_loan_for_total_time = time() - get_loan_for_start_time $ is_edition = doc.key.split('/')[1] == 'books' diff --git a/openlibrary/plugins/upstream/borrow.py b/openlibrary/plugins/upstream/borrow.py index f48c1656115..9e9827046c7 100644 --- a/openlibrary/plugins/upstream/borrow.py +++ b/openlibrary/plugins/upstream/borrow.py @@ -149,6 +149,9 @@ def POST(self, key): account = OpenLibraryAccount.get_by_email(user.email) ia_itemname = account.itemname if account else None s3_keys = web.ctx.site.store.get(account._key).get('s3_keys') + lending.get_cached_loans_of_user.memcache_delete( + user.key, {} + ) # invalidate cache for user loans if not user or not ia_itemname or not s3_keys: web.setcookie(config.login_cookie_name, "", expires=-1) redirect_url = ( @@ -165,10 +168,16 @@ def POST(self, key): user.update_loan_status() raise web.seeother(edition_redirect) elif action == 'join-waitinglist': + lending.get_cached_user_waiting_loans.memcache_delete( + user.key, {} + ) # invalidate cache for user waiting loans lending.s3_loan_api(s3_keys, ocaid=edition.ocaid, action='join_waitlist') stats.increment('ol.loans.joinWaitlist') raise web.redirect(edition_redirect) elif action == 'leave-waitinglist': + lending.get_cached_user_waiting_loans.memcache_delete( + user.key, {} + ) # invalidate cache for user waiting loans lending.s3_loan_api(s3_keys, ocaid=edition.ocaid, 
action='leave_waitlist') stats.increment('ol.loans.leaveWaitlist') raise web.redirect(edition_redirect)
Cache Patron's Active Loans On several pages (e.g. LoanStatus) we fetch the patron's active loans (which can be expensive) to see if they've borrowed a book (e.g. on the book page). Ideally, we'd cache this for 5 minutes and invalidate it within the /borrow endpoint. The one "gotcha" is that if a patron borrows a book from archive.org, Open Library doesn't know how to bust the cache, but that should be okay because presumably this will be fixed within the 5 minute window. Exploration is required into whether caching actually works for this and how performance is impacted.
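As a toy illustration of the cache-then-invalidate idea described here (not Open Library's actual `memcache_memoize` helper), a plain in-process TTL cache with an explicit bust hook for the /borrow endpoint:

```python
# Toy sketch only: a 5-minute TTL cache for a patron's loans with explicit
# invalidation. Function and variable names are illustrative, not OL's.
import time
from typing import Callable

TTL_SECONDS = 5 * 60
_loans_cache: dict[str, tuple[float, list]] = {}

def get_loans_cached(user_key: str, fetch: Callable[[str], list]) -> list:
    hit = _loans_cache.get(user_key)
    if hit is not None and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]                   # fresh enough: skip the expensive fetch
    loans = fetch(user_key)
    _loans_cache[user_key] = (time.time(), loans)
    return loans

def invalidate_loans(user_key: str) -> None:
    """Call this from the /borrow endpoint so OL-side loans never look stale."""
    _loans_cache.pop(user_key, None)
```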
1,685,797,378,000
[ "Priority: 1", "Needs: Patch Deploy" ]
Performance Issue
[ "openlibrary/core/lending.py:get_loans_of_user", "openlibrary/core/models.py:User.get_loan_for", "openlibrary/core/models.py:User.get_waiting_loan_for", "openlibrary/plugins/upstream/borrow.py:borrow.POST" ]
[ "openlibrary/core/lending.py:get_user_waiting_loans", "openlibrary/core/models.py:User.get_user_waiting_loans" ]
rwth-i6/sisyphus
rwth-i6__sisyphus-191
a5ddfaa5257beafb5fdce28d96e6ae1e574ee9fe
diff --git a/sisyphus/aws_batch_engine.py b/sisyphus/aws_batch_engine.py index 4b0173f..80f454e 100644 --- a/sisyphus/aws_batch_engine.py +++ b/sisyphus/aws_batch_engine.py @@ -1,4 +1,4 @@ -""" This is an experimental implementation for the aws batch engine. +"""This is an experimental implementation for the aws batch engine. WARNING: After running some setups I can currently not recommend using aws batch with Sisyphus. AWS parallelcluster (https://aws.amazon.com/blogs/opensource/aws-parallelcluster/) looks like a easy way how @@ -88,10 +88,13 @@ def system_call(self, command, send_to_stdin=None): system_command = command logging.debug("shell_cmd: %s" % " ".join(system_command)) - p = subprocess.Popen(system_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) if send_to_stdin: send_to_stdin = send_to_stdin.encode() - out, err = p.communicate(input=send_to_stdin, timeout=30) + try: + p = subprocess.run(system_command, input=send_to_stdin, capture_output=True, timeout=30) + except subprocess.TimeoutExpired: + logging.warning("Timeout expired for command: %s" % " ".join(system_command)) + return [], ["TimeoutExpired"], -1 def fix_output(o): """ @@ -105,9 +108,9 @@ def fix_output(o): assert False return o[:-1] - out = fix_output(out) - err = fix_output(err) - retval = p.wait(timeout=30) + out = fix_output(p.stdout) + err = fix_output(p.stderr) + retval = p.returncode return out, err, retval diff --git a/sisyphus/load_sharing_facility_engine.py b/sisyphus/load_sharing_facility_engine.py index 2b63318..5a24ad1 100644 --- a/sisyphus/load_sharing_facility_engine.py +++ b/sisyphus/load_sharing_facility_engine.py @@ -56,10 +56,13 @@ def system_call(self, command, send_to_stdin=None): system_command = command logging.debug("shell_cmd: %s" % " ".join(system_command)) - p = subprocess.Popen(system_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) if send_to_stdin: send_to_stdin = send_to_stdin.encode() - out, err = p.communicate(input=send_to_stdin, timeout=30) + try: + p = subprocess.run(system_command, input=send_to_stdin, capture_output=True, timeout=30) + except subprocess.TimeoutExpired: + logging.warning("Timeout expired for command: %s" % " ".join(system_command)) + return [], ["TimeoutExpired"], -1 def fix_output(o): # split output and drop last empty line @@ -69,9 +72,9 @@ def fix_output(o): assert False return o[:-1] - out = fix_output(out) - err = fix_output(err) - retval = p.wait(timeout=30) + out = fix_output(p.stdout) + err = fix_output(p.stderr) + retval = p.returncode # Check for ssh error err_ = [] diff --git a/sisyphus/simple_linux_utility_for_resource_management_engine.py b/sisyphus/simple_linux_utility_for_resource_management_engine.py index a918928..48ad8db 100644 --- a/sisyphus/simple_linux_utility_for_resource_management_engine.py +++ b/sisyphus/simple_linux_utility_for_resource_management_engine.py @@ -86,16 +86,21 @@ def system_call(self, command, send_to_stdin=None): """ if self.gateway: escaped_command = [shlex.quote(s) for s in command] # parameters need to be shell safe when sending via ssh - system_command = ["ssh", "-x", self.gateway] + [" ".join(["cd", os.getcwd(), "&&"] + escaped_command)] + system_command = ["ssh", "-x", self.gateway, "-o", "BatchMode=yes"] + [ + " ".join(["cd", os.getcwd(), "&&"] + escaped_command) + ] else: # no gateway given, skip ssh local system_command = command logging.debug("shell_cmd: %s" % " ".join(system_command)) - p = subprocess.Popen(system_command, stdin=subprocess.PIPE, 
stdout=subprocess.PIPE, stderr=subprocess.PIPE) if send_to_stdin: send_to_stdin = send_to_stdin.encode() - out, err = p.communicate(input=send_to_stdin, timeout=30) + try: + p = subprocess.run(system_command, input=send_to_stdin, capture_output=True, timeout=30) + except subprocess.TimeoutExpired: + logging.warning("Timeout expired for command: %s" % " ".join(system_command)) + return [], ["TimeoutExpired"], -1 def fix_output(o): """ @@ -109,9 +114,9 @@ def fix_output(o): assert False return o[:-1] - out = fix_output(out) - err = fix_output(err) - retval = p.wait(timeout=30) + out = fix_output(p.stdout) + err = fix_output(p.stderr) + retval = p.returncode # Check for ssh error err_ = [] diff --git a/sisyphus/son_of_grid_engine.py b/sisyphus/son_of_grid_engine.py index acda4f0..6682aba 100644 --- a/sisyphus/son_of_grid_engine.py +++ b/sisyphus/son_of_grid_engine.py @@ -87,16 +87,21 @@ def system_call(self, command, send_to_stdin=None): :rtype: list[bytes], list[bytes], int """ if self.gateway: - system_command = ["ssh", "-x", self.gateway] + [" ".join(["cd", os.getcwd(), "&&"] + command)] + system_command = ["ssh", "-x", self.gateway, "-o", "BatchMode=yes"] + [ + " ".join(["cd", os.getcwd(), "&&"] + command) + ] else: # no gateway given, skip ssh local system_command = command logging.debug("shell_cmd: %s" % " ".join(system_command)) - p = subprocess.Popen(system_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) if send_to_stdin: send_to_stdin = send_to_stdin.encode() - out, err = p.communicate(input=send_to_stdin, timeout=30) + try: + p = subprocess.run(system_command, input=send_to_stdin, capture_output=True, timeout=30) + except subprocess.TimeoutExpired: + logging.warning("Timeout expired for command: %s" % " ".join(system_command)) + return [], ["TimeoutExpired"], -1 def fix_output(o): """ @@ -110,9 +115,9 @@ def fix_output(o): assert False return o[:-1] - out = fix_output(out) - err = fix_output(err) - retval = p.wait(timeout=30) + out = fix_output(p.stdout) + err = fix_output(p.stderr) + retval = p.returncode # Check for ssh error err_ = []
Too many open file descriptors Hi, I was using sisyphus today for a big recipe and got an error in my worker claiming `too many open files`: ``` OSError: [Errno 24] Unable to synchronously open file (unable to open file: name = <filename>, errno = 24, error message = 'Too many open files', flags = 0, o_flags = 0) ``` However, not only did the worker crash, but the manager crashed with this error as well. Moreover, the terminal (tmux pane) entered a state where every character I typed was converted into a newline character. As a result, I couldn't type any instruction, so I ended up killing the tmux pane and recreating it. I investigated what was happening and developed a really small test: ``` # test.py def py(): pass ``` I found out that running the sisyphus manager on the test (bare; without any `settings.py`) opened ~3k files, from my baseline of 524 open files to 3254 open files after running `sis m test.py`, according to `lsof | grep <user> | wc`. Besides that, every job triggered by the manager added exactly 105 open files to the list of open file descriptors. However, I can't reproduce this starting from scratch, which leads me to think that it might be a problem with how our code base interacts with sisyphus (or just with our code base). I'll keep investigating and keep you posted. Is sisyphus intentionally opening this many files, e.g. because of some caching strategy or related work? Was this ever addressed? If you need more details, I'll be glad to provide them. Thank you in advance.
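Since the report above counts descriptors with `lsof`, here is a hedged in-process alternative using psutil; note that `open_files()` only lists regular files, so it undercounts relative to `lsof`:

```python
# Hedged sketch: observe descriptor usage from inside the process itself.
# Process.num_fds() is POSIX-only; open_files() excludes sockets and pipes.
import psutil

proc = psutil.Process()
print("open regular files:", len(proc.open_files()))
print("total file descriptors:", proc.num_fds())
```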
I had this happen again. It was with a relatively big setup, but I'm not sure what causes the issue yet, since my manager shouldn't be opening many files, if any. Please find the corresponding stack trace from the manager attached [here](https://github.com/user-attachments/files/15509563/manager_too_many_open_files_github.txt). Note that the last newlines in the stack trace are relevant: they represent me trying to type anything at all, with every character becoming a newline: > Moreover, the terminal (tmux pane) entered a state where every character I typed was converted into a newline character. As a result, I couldn't type any instruction, so I ended up killing the tmux pane and recreating it. I think this could be an interaction with the manager prompting me for my SSH password many times because I had left the tmux session, and then crashing, thus leaving the prompt in an unstable state (i.e. whatever you type at the SSH password prompt is not echoed). Analyzing the stack trace, I found that both issues (too many open files, and the ssh key prompt) could be related. What sisyphus seems to be doing after each password prompt is running a subprocess with the `squeue` command (I'm running on SLURM, but this also used to happen in SGE, so it should be cluster-independent). Right now I'm running my setup with the `gateway="..."` option in `settings.py`, but I recall it could have happened without such an option. This happens every 30 seconds, which is the interval at which my sisyphus is configured to scan the queue. With an open-file cap of 1024 in the manager (assuming sisyphus doesn't open any other files and none are open to begin with), the time needed to reach the cap would be 1024 * 30 = ~30k seconds = ~8.5 hours. Even though the practical cap is lower because the manager has other files open, this matches the lengths of time for which I left the ssh/tmux session unattended (evening/night). I'll try to solve it on my end, but I think it could also make sense to fix it in sisyphus. How can we tell sisyphus to wait before running a new queue scan job? Maybe by storing the last queue command issued and setting it to `None` after completion? Edit: a pretty easy solution would probably be to set the timeout of the queue-scan SSH command to the number of seconds after which sisyphus issues a new queue command.
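The patch above replaces `Popen` + `communicate` with a timeout-bounded `subprocess.run`; a minimal standalone sketch of that shape (the command here is illustrative):

```python
# Minimal sketch of the timeout-bounded system call the patch moves to: a hung
# ssh/squeue invocation now fails fast instead of accumulating open pipes.
import logging
import subprocess

def system_call(command, send_to_stdin=None):
    try:
        p = subprocess.run(
            command, input=send_to_stdin, capture_output=True, timeout=30
        )
    except subprocess.TimeoutExpired:
        logging.warning("Timeout expired for command: %s", " ".join(command))
        return [], ["TimeoutExpired"], -1
    return p.stdout.splitlines(), p.stderr.splitlines(), p.returncode

out, err, retval = system_call(["echo", "hello"])
print(out, err, retval)  # [b'hello'] [] 0
```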
1,717,145,069,000
[]
Performance Issue
[ "sisyphus/aws_batch_engine.py:AWSBatchEngine.system_call", "sisyphus/load_sharing_facility_engine.py:LoadSharingFacilityEngine.system_call", "sisyphus/simple_linux_utility_for_resource_management_engine.py:SimpleLinuxUtilityForResourceManagementEngine.system_call", "sisyphus/son_of_grid_engine.py:SonOfGridEngine.system_call" ]
[]
vllm-project/vllm
vllm-project__vllm-4109
ec784b2526219cd96159a52074ab8cd4e684410a
diff --git a/requirements-common.txt b/requirements-common.txt index f41873570aa67..bf9987e3af014 100644 --- a/requirements-common.txt +++ b/requirements-common.txt @@ -17,6 +17,6 @@ prometheus_client >= 0.18.0 prometheus-fastapi-instrumentator >= 7.0.0 tiktoken >= 0.6.0 # Required for DBRX tokenizer lm-format-enforcer == 0.10.1 -outlines == 0.0.34 # Requires torch >= 2.1.0 +outlines >= 0.0.43 # Requires torch >= 2.1.0 typing_extensions filelock >= 3.10.4 # filelock starts to support `mode` argument from 3.10.4 diff --git a/vllm/model_executor/guided_decoding/outlines_decoding.py b/vllm/model_executor/guided_decoding/outlines_decoding.py index 8403604286903..721f7e0530cb7 100644 --- a/vllm/model_executor/guided_decoding/outlines_decoding.py +++ b/vllm/model_executor/guided_decoding/outlines_decoding.py @@ -1,8 +1,6 @@ import asyncio import concurrent.futures -from copy import copy from enum import Enum -from functools import lru_cache from json import dumps as json_dumps from re import escape as regex_escape from typing import Tuple, Union @@ -54,8 +52,10 @@ class GuidedDecodingMode(Enum): async def get_outlines_guided_decoding_logits_processor( - request: Union[CompletionRequest, ChatCompletionRequest], - tokenizer) -> Union[JSONLogitsProcessor, RegexLogitsProcessor, None]: + request: Union[CompletionRequest, + ChatCompletionRequest], tokenizer: PreTrainedTokenizerBase +) -> Union[JSONLogitsProcessor, RegexLogitsProcessor, CFGLogitsProcessor, + None]: """ Given an OpenAI-compatible request, check for guided decoding parameters and get the necessary logits processor for the given guide. @@ -64,7 +64,7 @@ async def get_outlines_guided_decoding_logits_processor( """ global global_thread_pool guide, mode = _get_guide_and_mode(request) - if not guide: + if not guide or not mode: return None if global_thread_pool is None: @@ -72,15 +72,9 @@ async def get_outlines_guided_decoding_logits_processor( max_workers=2) loop = asyncio.get_running_loop() - result = await loop.run_in_executor(global_thread_pool, - _get_cached_logits_processor, guide, - tokenizer, mode, - request.guided_whitespace_pattern) - - logits_processor = copy(result) - # reset logits processor's internal state - logits_processor.init_state() - return logits_processor + return await loop.run_in_executor(global_thread_pool, + _get_logits_processor, guide, tokenizer, + mode, request.guided_whitespace_pattern) def _get_guide_and_mode( @@ -115,11 +109,10 @@ def _get_guide_and_mode( return None, None -@lru_cache(maxsize=32) -def _get_cached_logits_processor(guide: str, - tokenizer: PreTrainedTokenizerBase, - mode: GuidedDecodingMode, - whitespace_pattern: Union[str, None]): +def _get_logits_processor( + guide: str, tokenizer: PreTrainedTokenizerBase, mode: GuidedDecodingMode, + whitespace_pattern: Union[str, None] +) -> Union[JSONLogitsProcessor, RegexLogitsProcessor, CFGLogitsProcessor]: if mode == GuidedDecodingMode.JSON: return JSONLogitsProcessor(guide, tokenizer, whitespace_pattern) elif mode == GuidedDecodingMode.REGEX or mode == GuidedDecodingMode.CHOICE: diff --git a/vllm/model_executor/guided_decoding/outlines_logits_processors.py b/vllm/model_executor/guided_decoding/outlines_logits_processors.py index a131c6a1b92b4..1618705ff2983 100644 --- a/vllm/model_executor/guided_decoding/outlines_logits_processors.py +++ b/vllm/model_executor/guided_decoding/outlines_logits_processors.py @@ -21,7 +21,7 @@ from typing import Callable, DefaultDict, Dict, List, Union import torch -from outlines.fsm.fsm import CFGFSM, FSM, RegexFSM +from 
outlines.fsm.guide import CFGGuide, Generate, Guide, RegexGuide, Write from outlines.fsm.json_schema import build_regex_from_schema from pydantic import BaseModel from transformers import PreTrainedTokenizerBase @@ -29,28 +29,32 @@ class BaseLogitsProcessor: - def __init__(self): - # Child class should use initialize in their init. - self.fsm: FSM - - def init_state(self): - """Initialize the FSM states.""" - self.fsm_state: DefaultDict[int, int] = defaultdict(int) + def __init__(self, guide: Guide): + self._guide: Guide = guide + self._fsm_state: DefaultDict[int, int] = defaultdict(int) def __call__(self, input_ids: List[int], scores: torch.Tensor) -> torch.Tensor: """Use the FSM to bias the logits before sampling the next token.""" seq_id = hash(tuple(input_ids)) - if len(input_ids) == 0: - self.init_state() - else: + if len(input_ids) > 0: last_token = input_ids[-1] last_seq_id = hash(tuple(input_ids[:-1])) - self.fsm_state[seq_id] = self.fsm.next_state( - self.fsm_state[last_seq_id], last_token) + self._fsm_state[seq_id] = self._guide.get_next_state( + state=self._fsm_state[last_seq_id], token_id=last_token) + + instruction = self._guide.get_next_instruction( + state=self._fsm_state[seq_id]) - allowed_tokens = self.fsm.allowed_token_ids(self.fsm_state[seq_id]) + if type(instruction) == Generate: + allowed_tokens = instruction.tokens + elif type(instruction) == Write: + # TODO: support fast forward tokens + allowed_tokens = [instruction.tokens[0]] + else: + raise TypeError( + f"Unsupported instruction type {type(instruction)}") mask = torch.full((scores.shape[-1], ), -math.inf, @@ -62,6 +66,13 @@ def __call__(self, input_ids: List[int], class RegexLogitsProcessor(BaseLogitsProcessor): + @classmethod + @lru_cache(maxsize=32) + def _get_guide(cls, regex_string: str, + tokenizer: PreTrainedTokenizerBase) -> Guide: + tokenizer = _adapt_tokenizer(tokenizer) + return RegexGuide(regex_string, tokenizer) + def __init__(self, regex_string: str, tokenizer: PreTrainedTokenizerBase): """Compile the FSM that drives the regex-structured generation. @@ -73,9 +84,8 @@ def __init__(self, regex_string: str, tokenizer: PreTrainedTokenizerBase): The model's tokenizer """ - tokenizer = _adapt_tokenizer(tokenizer) - fsm = RegexFSM(regex_string, tokenizer) - self.fsm = fsm + super().__init__( + RegexLogitsProcessor._get_guide(regex_string, tokenizer)) class JSONLogitsProcessor(RegexLogitsProcessor): @@ -115,6 +125,12 @@ def __init__(self, schema: Union[str, Dict, BaseModel], class CFGLogitsProcessor(BaseLogitsProcessor): + @classmethod + @lru_cache(maxsize=32) + def _get_guide(cls, cfg: str, tokenizer: PreTrainedTokenizerBase) -> Guide: + tokenizer = _adapt_tokenizer(tokenizer) + return CFGGuide(cfg, tokenizer) + def __init__(self, cfg: str, tokenizer: PreTrainedTokenizerBase): """Compile the FSM that drives the context free grammar generation. @@ -126,17 +142,11 @@ def __init__(self, cfg: str, tokenizer: PreTrainedTokenizerBase): The model's tokenizer """ - tokenizer = _adapt_tokenizer(tokenizer) - fsm = CFGFSM(cfg, tokenizer) - self.fsm = fsm - - def init_state(self): - """Initialize state with a CFGFSM copy.""" - super().init_state() - self.fsm = self.fsm.copy() + super().__init__(CFGLogitsProcessor._get_guide(cfg, tokenizer)) + self._guide = self._guide.copy() -@lru_cache +@lru_cache(maxsize=32) def _adapt_tokenizer(tokenizer: PreTrainedTokenizerBase): """Adapt vLLM's tokenizer to use to compile the FSM.
diff --git a/tests/entrypoints/test_guided_processors.py b/tests/entrypoints/test_guided_processors.py index 5d4163e96fd87..fb32a9d155bc0 100644 --- a/tests/entrypoints/test_guided_processors.py +++ b/tests/entrypoints/test_guided_processors.py @@ -63,7 +63,6 @@ def test_guided_logits_processors(): tokenizer, whitespace_pattern=None) - regex_LP.init_state() token_ids = tokenizer.encode( f"Give an example IPv4 address with this regex: {TEST_REGEX}") tensor = torch.rand(32000) @@ -72,7 +71,6 @@ def test_guided_logits_processors(): assert tensor.shape == original_tensor.shape assert not torch.allclose(tensor, original_tensor) - json_LP.init_state() token_ids = tokenizer.encode( f"Give an employee profile that fits this schema: {TEST_SCHEMA}") tensor = torch.rand(32000)
[Feature]: Update Outlines Integration from `FSM` to `Guide` ### 🚀 The feature, motivation and pitch Recently outlines updated their interface from FSM to Guide to support "acceleration"/"fast-forward", which emits the next set of tokens directly whenever they are fully determined by the grammar. For a JSON schema, such cases include the keys, the `"`, and the `}`. This is non-trivial but very useful for improving vLLM. It should also help other frameworks like AICI #3714. ### Alternatives _No response_ ### Additional context _No response_
@simon-mo do you think this is an issue that could be driven by a community contribution? I'd be interested in working on it, but would most probably need a more knowledgeable counterpart to guide me through it. Yes. Contributions welcomed. Great. Here is a first conceptual sketch: 1. introduce a new abstraction (or extend the logits processor interface) that can be implemented with the support of an outlines `Guide` (https://github.com/outlines-dev/outlines/blob/main/outlines/fsm/guide.py). 2. In essence, we need to introduce a new call to the guided generation engine on each LLM generation step to check whether there are ff-tokens and, if so, add them to the sequence. 3. This engine "instance" must be the same one that is used in the `LogitsProcessor` layers in the models when there are no ff-tokens but logit biases. Would `execute_model` in `model_runner.py` be the right place to put this call? I'm not sure how well this sketch aligns with AICI's protocol (https://github.com/microsoft/aici/blob/main/docs/aicirt-proto.md) – it seems to be a bit more involved, but it might be close enough to count as a first step. What do you think? Is this the right direction? I would break this down into two steps. First, for compatibility, we should migrate the interface without considering fast-forward. Next, when the spec decode framework is merged, we will supply the fast-forward tokens, because by then it becomes easy to add them.
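A schematic, self-contained sketch of the per-sequence state bookkeeping the migrated logits processor uses (hash the token-id prefix, advance one state per new token); `toy_next_state` is a stand-in for outlines' `Guide.get_next_state`, not the real automaton:

```python
# Schematic sketch, not vLLM code: FSM state per sequence, keyed by a hash of
# the token-id prefix, mirroring the patched BaseLogitsProcessor.__call__.
from collections import defaultdict
from typing import DefaultDict, List

def toy_next_state(state: int, token_id: int) -> int:
    return state + 1            # a real Guide walks its compiled automaton

fsm_state: DefaultDict[int, int] = defaultdict(int)   # seq hash -> FSM state

def advance(input_ids: List[int]) -> int:
    seq_id = hash(tuple(input_ids))
    if input_ids:               # non-empty prefix: step from the parent state
        last_seq_id = hash(tuple(input_ids[:-1]))
        fsm_state[seq_id] = toy_next_state(fsm_state[last_seq_id], input_ids[-1])
    return fsm_state[seq_id]

print(advance([]), advance([5]), advance([5, 7]))     # 0 1 2
```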
1,713,254,764,000
[]
Feature Request
[ "vllm/model_executor/guided_decoding/outlines_decoding.py:get_outlines_guided_decoding_logits_processor", "vllm/model_executor/guided_decoding/outlines_decoding.py:_get_cached_logits_processor", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:BaseLogitsProcessor.__init__", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:BaseLogitsProcessor.init_state", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:BaseLogitsProcessor.__call__", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:RegexLogitsProcessor", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:RegexLogitsProcessor.__init__", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:CFGLogitsProcessor", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:CFGLogitsProcessor.__init__", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:CFGLogitsProcessor.init_state" ]
[ "vllm/model_executor/guided_decoding/outlines_decoding.py:_get_logits_processor", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:RegexLogitsProcessor._get_guide", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:RegexLogitsProcessor", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:CFGLogitsProcessor._get_guide", "vllm/model_executor/guided_decoding/outlines_logits_processors.py:CFGLogitsProcessor" ]
sympy/sympy
sympy__sympy-27223
d293133e81194adc11177729af91c970f092a6e7
diff --git a/sympy/utilities/lambdify.py b/sympy/utilities/lambdify.py index a84d1a1c26c1..518f5cb67bf5 100644 --- a/sympy/utilities/lambdify.py +++ b/sympy/utilities/lambdify.py @@ -11,6 +11,7 @@ import keyword import textwrap import linecache +import weakref # Required despite static analysis claiming it is not used from sympy.external import import_module # noqa:F401 @@ -907,8 +908,17 @@ def _lambdifygenerated(x): # mtime has to be None or else linecache.checkcache will remove it linecache.cache[filename] = (len(funcstr), None, funcstr.splitlines(True), filename) # type: ignore + # Remove the entry from the linecache when the object is garbage collected + def cleanup_linecache(filename): + def _cleanup(): + if filename in linecache.cache: + del linecache.cache[filename] + return _cleanup + func = funclocals[funcname] + weakref.finalize(func, cleanup_linecache(filename)) + # Apply the docstring sig = "func({})".format(", ".join(str(i) for i in names)) sig = textwrap.fill(sig, subsequent_indent=' '*8)
diff --git a/sympy/utilities/tests/test_lambdify.py b/sympy/utilities/tests/test_lambdify.py index 4a82290569ea..428cbaed92b6 100644 --- a/sympy/utilities/tests/test_lambdify.py +++ b/sympy/utilities/tests/test_lambdify.py @@ -1,6 +1,8 @@ from itertools import product import math import inspect +import linecache +import gc import mpmath @@ -981,6 +983,18 @@ def test_lambdify_docstring(): assert func.__doc__.splitlines()[:len(ref)] == ref +def test_lambdify_linecache(): + func = lambdify(x, x + 1) + source = 'def _lambdifygenerated(x):\n return x + 1\n' + assert inspect.getsource(func) == source + filename = inspect.getsourcefile(func) + assert filename.startswith('<lambdifygenerated-') + assert filename in linecache.cache + assert linecache.cache[filename] == (len(source), None, source.splitlines(True), filename) + del func + gc.collect() + assert filename not in linecache.cache + #================== Test special printers ==========================
Memory Leak in `sympy.lambdify` Hi there, I'm working with an [algorithm](https://github.com/SymposiumOrganization/NeuralSymbolicRegressionThatScales) that relies on calling `sympy.lambdify` hundreds of millions of times (~200M) and noticed the memory usage of the process steadily creeping up and eventually crashing the program. I was able to trace the problem to the following (see **Demo** below). Notably, the memory usage still increases despite 1. lambdifying the same equation again 2. "deleting" the lambdified equation **System Information:** WSL2 (Windows 10.0.22631 Build 22631) ``` > lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 22.04.3 LTS Release: 22.04 Codename: jammy ``` ``` > python --version Python 3.11.10 ``` ``` > pip freeze mpmath==1.3.0 psutil==6.1.0 sympy==1.13.3 ``` ### Demo: ```python # sympy_memory.py import argparse import psutil import sympy if __name__ == '__main__': # Parse command line arguments parser = argparse.ArgumentParser(description='Run equation n times') parser.add_argument('-n', type=int, default=1, help='Number of repetitions') args = parser.parse_args() # Get the current process to monitor memory usage process = psutil.Process() # Parse an example equation equation_str = 'x' parsed_equation = sympy.sympify(equation_str) print("Memory Usage:") for i in range(args.n): # Lambdifying the (same) equation in each iteration lambdified_equation = sympy.lambdify('x', parsed_equation) # Print memory usage every 10000 iterations if i % 10000 == 0: mem_usage = process.memory_info().rss / 1024 / 1024 # Convert to MB print(f"{i}/{args.n}: {mem_usage:.2f} MB") # "Deleting" the lambdified equation del lambdified_equation ``` ``` > python sympy_memory.py -n 100000 Memory Usage: 0/100000: 49.91 MB 10000/100000: 54.29 MB 20000/100000: 59.27 MB 30000/100000: 64.17 MB 40000/100000: 68.04 MB 50000/100000: 72.05 MB 60000/100000: 75.91 MB 70000/100000: 79.78 MB 80000/100000: 83.45 MB 90000/100000: 89.08 MB ``` **Observation:** The memory usage increases by about ~~+390KB~~ (correction: 390B https://github.com/sympy/sympy/issues/27216#issuecomment-2453168997) per lambdified equation, totalling +39MB over 100k lambdifications and deletions. ### Control (to exclude external factors): ```diff parsed_equation = sympy.sympify(equation_str) + lambdified_equation = sympy.lambdify('x', parsed_equation) print("Memory Usage:") for i in range(args.n): - # Lambdifying the (same) equation in each iteration - lambdified_equation = sympy.lambdify('x', parsed_equation) # Print memory usage every 10000 iterations if i % 10000 == 0: mem_usage = process.memory_info().rss / 1024 / 1024 # Convert to MB print(f"{i}/{args.n}: {mem_usage:.2f} MB") - # "Deleting" the lambdified equation - del lambdified_equation ``` ``` > python sympy_memory.py -n 100000 Memory Usage: 0/100000: 49.91 MB 10000/100000: 49.91 MB 20000/100000: 49.91 MB 30000/100000: 49.91 MB 40000/100000: 49.91 MB 50000/100000: 49.91 MB 60000/100000: 49.91 MB 70000/100000: 49.91 MB 80000/100000: 49.91 MB 90000/100000: 49.91 MB ``` **Observation:** The memory usage does not increase.
> The memory usage increases by about +390KB per lambdified equation I assume you mean per 10000 lambdified equations so it is about 400 bytes per lambdified equation. My guess is that each call to lambdify creates a Dummy and then something creates a polynomial ring with that dummy and the polynomial ring never gets deleted. The memory leak is from this line: https://github.com/sympy/sympy/blob/d293133e81194adc11177729af91c970f092a6e7/sympy/utilities/lambdify.py#L908 I assume that the purpose of the line is something to do with tracebacks but I don't know lambdify so well. > > The memory usage increases by about +390KB per lambdified equation > > I assume you mean per 10000 lambdified equations so it is about 400 bytes per lambdified equation. Correct, (89.08MB - 49.91MB) / 100,000 is about 390B per equation. I experimented with wrapping the function in an object that would clear the cache upon deletion of the function which significantly reduces the accumulated memory. ### Current: https://github.com/sympy/sympy/blob/d293133e81194adc11177729af91c970f092a6e7/sympy/utilities/lambdify.py#L933 ### Experimental: Modified `sympy/utilities/lambdify.py`: ``` diff func.__doc__ = ( "Created with lambdify. Signature:\n\n" "{sig}\n\n" "Expression:\n\n" "{expr}\n\n" "Source code:\n\n" "{src}\n\n" "Imported modules:\n\n" "{imp_mods}" ).format(sig=sig, expr=expr_str, src=src_str, imp_mods='\n'.join(imp_mod_lines)) - return func + return DeletableLambdaFunction(func) + +class DeletableLambdaFunction: + def __init__(self, func): + self.func = func + + def __call__(self, *args, **kwargs): + return self.func(*args, **kwargs) + + def __del__(self): + linecache.clearcache() + del self.func + + def __doc__(self): + return self.func.__doc__ def _module_present(modname, modlist): if modname in modlist: return True for m in modlist: if hasattr(m, '__name__') and m.__name__ == modname: return True return False ``` ``` > python sympy_memory.py -n 100000 Memory Usage: 0/100000: 49.80 MB 10000/100000: 50.06 MB 20000/100000: 51.12 MB 30000/100000: 51.12 MB 40000/100000: 51.12 MB 50000/100000: 51.12 MB 60000/100000: 51.38 MB 70000/100000: 51.38 MB 80000/100000: 51.46 MB 90000/100000: 51.46 MB ``` First question is whether there is a good reason to add to linecache at all. It was added in 8a892b2 from gh-14739 Inserting into `linecache` is needed for the debugger to be able to show the source of the compiled function. Also clearing linecache is a global operation which interfers with other libraries, so we would need to selectively delete only the relevant entries from a `__del__` method. edit: There's a [relevant issue](https://github.com/pytorch/pytorch/issues/62917) at pytorch which links to their current implementation and their plan to move to lazycache. Perhaps that's the proper fix, I don't know for sure. A simple solution would be an option for `lambdify` to disable adding to linecache like `lambdify(..., linecache=False)`. I don't see how lazycache would solve this issue. lazycache lets you delay computation of the lines (the idea is to avoid unnecessary I/O in linecache). But for lambdify that would either mean just storing the lines elsewhere in the function closure, or recomputing lambdify in it. The latter could be bug prone if any part of lambdify relied on some state that changed since the original call. 
I think it should be possible to fix this by using `weakref.finalize` https://docs.python.org/3/library/weakref.html#weakref.finalize to clear the linecache entry whenever the function is garbage collected. (it would also be nice if linecache supported this directly; maybe someone can open an issue/PR to CPython)
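A minimal sketch of that `weakref.finalize` idea (a hypothetical helper, not sympy's actual fix; it assumes the generated function's synthetic filename is known, and it reaches into `linecache.cache`, which is an implementation detail): ```python import linecache import weakref def _register_linecache_cleanup(func, filename): # Evict the generated source from linecache once the lambdified function # is garbage collected, so repeated lambdify calls stop accumulating entries. weakref.finalize(func, linecache.cache.pop, filename, None) ```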
1,730,837,688,000
[ "utilities.lambdify" ]
Bug Report
[ "sympy/utilities/lambdify.py:lambdify" ]
[]
modin-project/modin
modin-project__modin-6836
097ea527c8e3f099e1f252b067a1d5eb055ad0b5
diff --git a/modin/core/dataframe/algebra/binary.py b/modin/core/dataframe/algebra/binary.py index f19040cc104..af0c6ee7e8e 100644 --- a/modin/core/dataframe/algebra/binary.py +++ b/modin/core/dataframe/algebra/binary.py @@ -415,7 +415,9 @@ def caller( ): shape_hint = "column" new_modin_frame = query_compiler._modin_frame.map( - lambda df: func(df, other, *args, **kwargs), + func, + func_args=(other, *args), + func_kwargs=kwargs, dtypes=dtypes, ) return query_compiler.__constructor__( diff --git a/modin/core/dataframe/pandas/dataframe/dataframe.py b/modin/core/dataframe/pandas/dataframe/dataframe.py index 7da33f52284..029de373f75 100644 --- a/modin/core/dataframe/pandas/dataframe/dataframe.py +++ b/modin/core/dataframe/pandas/dataframe/dataframe.py @@ -2093,6 +2093,8 @@ def map( func: Callable, dtypes: Optional[str] = None, new_columns: Optional[pandas.Index] = None, + func_args=None, + func_kwargs=None, ) -> "PandasDataframe": """ Perform a function that maps across the entire dataset. @@ -2108,13 +2110,19 @@ def map( new_columns : pandas.Index, optional New column labels of the result, its length has to be identical to the older columns. If not specified, old column labels are preserved. + func_args : iterable, optional + Positional arguments for the 'func' callable. + func_kwargs : dict, optional + Keyword arguments for the 'func' callable. Returns ------- PandasDataframe A new dataframe. """ - new_partitions = self._partition_mgr_cls.map_partitions(self._partitions, func) + new_partitions = self._partition_mgr_cls.map_partitions( + self._partitions, func, func_args, func_kwargs + ) if new_columns is not None and self.has_materialized_columns: assert len(new_columns) == len( self.columns diff --git a/modin/core/dataframe/pandas/partitioning/partition_manager.py b/modin/core/dataframe/pandas/partitioning/partition_manager.py index 3a1dd63e555..0e9d35cf545 100644 --- a/modin/core/dataframe/pandas/partitioning/partition_manager.py +++ b/modin/core/dataframe/pandas/partitioning/partition_manager.py @@ -566,7 +566,13 @@ def broadcast_axis_partitions( @classmethod @wait_computations_if_benchmark_mode - def map_partitions(cls, partitions, map_func): + def map_partitions( + cls, + partitions, + map_func, + func_args=None, + func_kwargs=None, + ): """ Apply `map_func` to every partition in `partitions`. @@ -576,6 +582,10 @@ def map_partitions(cls, partitions, map_func): Partitions housing the data of Modin Frame. map_func : callable Function to apply. + func_args : iterable, optional + Positional arguments for the 'map_func'. + func_kwargs : dict, optional + Keyword arguments for the 'map_func'. Returns ------- @@ -585,14 +595,23 @@ def map_partitions(cls, partitions, map_func): preprocessed_map_func = cls.preprocess_func(map_func) return np.array( [ - [part.apply(preprocessed_map_func) for part in row_of_parts] + [ + part.apply( + preprocessed_map_func, + *func_args if func_args is not None else (), + **func_kwargs if func_kwargs is not None else {}, + ) + for part in row_of_parts + ] for row_of_parts in partitions ] ) @classmethod @wait_computations_if_benchmark_mode - def lazy_map_partitions(cls, partitions, map_func, func_args=None): + def lazy_map_partitions( + cls, partitions, map_func, func_args=None, func_kwargs=None + ): """ Apply `map_func` to every partition in `partitions` *lazily*. @@ -604,6 +623,8 @@ def lazy_map_partitions(cls, partitions, map_func, func_args=None): Function to apply. func_args : iterable, optional Positional arguments for the 'map_func'. 
+ func_kwargs : dict, optional + Keyword arguments for the 'map_func'. Returns ------- @@ -616,7 +637,8 @@ def lazy_map_partitions(cls, partitions, map_func, func_args=None): [ part.add_to_apply_calls( preprocessed_map_func, - *(tuple() if func_args is None else func_args), + *func_args if func_args is not None else (), + **func_kwargs if func_kwargs is not None else {}, ) for part in row ] diff --git a/modin/core/execution/ray/common/engine_wrapper.py b/modin/core/execution/ray/common/engine_wrapper.py index 8e20033d20d..e274d28c764 100644 --- a/modin/core/execution/ray/common/engine_wrapper.py +++ b/modin/core/execution/ray/common/engine_wrapper.py @@ -18,10 +18,14 @@ """ import asyncio +import os +from types import FunctionType import ray from ray.util.client.common import ClientObjectRef +from modin.error_message import ErrorMessage + @ray.remote def _deploy_ray_func(func, *args, **kwargs): # pragma: no cover @@ -48,6 +52,8 @@ def _deploy_ray_func(func, *args, **kwargs): # pragma: no cover class RayWrapper: """Mixin that provides means of running functions remotely and getting local results.""" + _func_cache = {} + @classmethod def deploy(cls, func, f_args=None, f_kwargs=None, num_returns=1): """ @@ -127,6 +133,19 @@ def put(cls, data, **kwargs): ray.ObjectID Ray object identifier to get the value by. """ + if isinstance(data, FunctionType): + qname = data.__qualname__ + if "<locals>" not in qname and "<lambda>" not in qname: + ref = cls._func_cache.get(data, None) + if ref is None: + if len(cls._func_cache) < 1024: + ref = ray.put(data) + cls._func_cache[data] = ref + else: + msg = "To many functions in the RayWrapper cache!" + assert "MODIN_GITHUB_CI" not in os.environ, msg + ErrorMessage.warn(msg) + return ref return ray.put(data, **kwargs) @classmethod
FEAT: Do not put binary functions to the Ray storage multiple times. Currently, the binary operations are wrapped into lambdas which are put into the Ray storage on each operation.
1,703,167,333,000
[]
Feature Request
[ "modin/core/dataframe/algebra/binary.py:Binary.register", "modin/core/dataframe/pandas/dataframe/dataframe.py:PandasDataframe.map", "modin/core/dataframe/pandas/partitioning/partition_manager.py:PandasDataframePartitionManager.map_partitions", "modin/core/dataframe/pandas/partitioning/partition_manager.py:PandasDataframePartitionManager.lazy_map_partitions", "modin/core/execution/ray/common/engine_wrapper.py:RayWrapper", "modin/core/execution/ray/common/engine_wrapper.py:RayWrapper.put" ]
[]
Open-MSS/MSS
Open-MSS__MSS-1967
56e9528b552a9d8f2e267661473b8f0e724fd093
diff --git a/.github/workflows/python-flake8.yml b/.github/workflows/python-flake8.yml index b578708e4..0e9003135 100644 --- a/.github/workflows/python-flake8.yml +++ b/.github/workflows/python-flake8.yml @@ -19,10 +19,10 @@ jobs: timeout-minutes: 10 steps: - uses: actions/checkout@v3 - - name: Set up Python 3.8 + - name: Set up Python 3.10 uses: actions/setup-python@v3 with: - python-version: 3.8 + python-version: "3.10" - name: Lint with flake8 run: | python -m pip install --upgrade pip diff --git a/mslib/msui/mscolab.py b/mslib/msui/mscolab.py index a6a2f0662..2795391e0 100644 --- a/mslib/msui/mscolab.py +++ b/mslib/msui/mscolab.py @@ -872,8 +872,11 @@ def add_operation(self): self.logout() else: if r.text == "True": - self.error_dialog = QtWidgets.QErrorMessage() - self.error_dialog.showMessage('Your operation was created successfully') + QtWidgets.QMessageBox.information( + self.ui, + "Creation successful", + "Your operation was created successfully.", + ) op_id = self.get_recent_op_id() self.new_op_id = op_id self.conn.handle_new_operation(op_id) @@ -1130,8 +1133,11 @@ def change_category_handler(self): if r.text == "True": self.active_operation_category = entered_operation_category self.reload_operation_list() - self.error_dialog = QtWidgets.QErrorMessage() - self.error_dialog.showMessage("Description is updated successfully.") + QtWidgets.QMessageBox.information( + self.ui, + "Update successful", + "Category is updated successfully.", + ) else: show_popup(self.ui, "Error", "Your Connection is expired. New Login required!") self.logout() @@ -1166,8 +1172,11 @@ def change_description_handler(self): self.set_operation_desc_label(entered_operation_desc) self.reload_operation_list() - self.error_dialog = QtWidgets.QErrorMessage() - self.error_dialog.showMessage("Description is updated successfully.") + QtWidgets.QMessageBox.information( + self.ui, + "Update successful", + "Description is updated successfully.", + ) else: show_popup(self.ui, "Error", "Your Connection is expired. New Login required!") self.logout() @@ -1207,8 +1216,11 @@ def rename_operation_handler(self): # Update other user's operation list self.conn.signal_operation_list_updated.connect(self.reload_operation_list) - self.error_dialog = QtWidgets.QErrorMessage() - self.error_dialog.showMessage("Operation is renamed successfully.") + QtWidgets.QMessageBox.information( + self.ui, + "Rename successful", + "Operation is renamed successfully.", + ) else: show_popup(self.ui, "Error", "Your Connection is expired. 
New Login required!") self.logout() @@ -1928,6 +1940,12 @@ def logout(self): return self.ui.local_active = True self.ui.menu_handler() + + # disconnect socket + if self.conn is not None: + self.conn.disconnect() + self.conn = None + # close all hanging window self.close_external_windows() self.hide_operation_options() @@ -1960,10 +1978,6 @@ def logout(self): self.ui.activeOperationDesc.setText(self.ui.tr("Select Operation to View Description.")) # set usernameLabel back to default self.ui.usernameLabel.setText("User") - # disconnect socket - if self.conn is not None: - self.conn.disconnect() - self.conn = None # Turn off work locally toggle self.ui.workLocallyCheckbox.blockSignals(True) self.ui.workLocallyCheckbox.setChecked(False) @@ -1986,11 +2000,9 @@ def logout(self): self.operation_archive_browser.hide() - # Don't try to activate local flighttrack while testing - if "pytest" not in sys.modules: - # activate first local flighttrack after logging out - self.ui.listFlightTracks.setCurrentRow(0) - self.ui.activate_selected_flight_track() + # activate first local flighttrack after logging out + self.ui.listFlightTracks.setCurrentRow(0) + self.ui.activate_selected_flight_track() class MscolabMergeWaypointsDialog(QtWidgets.QDialog, merge_wp_ui.Ui_MergeWaypointsDialog): diff --git a/mslib/msui/socket_control.py b/mslib/msui/socket_control.py index 7302b7a37..840081905 100644 --- a/mslib/msui/socket_control.py +++ b/mslib/msui/socket_control.py @@ -195,4 +195,24 @@ def save_file(self, token, op_id, content, comment=None): self.signal_reload.emit(op_id) def disconnect(self): + # Get all pyqtSignals defined in this class and disconnect them from all slots + allSignals = { + attr + for attr in dir(self.__class__) + if isinstance(getattr(self.__class__, attr), QtCore.pyqtSignal) + } + inheritedSignals = { + attr + for base_class in self.__class__.__bases__ + for attr in dir(base_class) + if isinstance(getattr(base_class, attr), QtCore.pyqtSignal) + } + signals = {getattr(self, signal) for signal in allSignals - inheritedSignals} + for signal in signals: + try: + signal.disconnect() + except TypeError: + # The disconnect call can fail if there are no connected slots, so catch that error here + pass + self.sio.disconnect() diff --git a/mslib/utils/airdata.py b/mslib/utils/airdata.py index b7ae556d2..e105147b6 100644 --- a/mslib/utils/airdata.py +++ b/mslib/utils/airdata.py @@ -252,7 +252,7 @@ def get_airspaces(countries=None): for data in airspace_data["polygon"].split(",")] _airspaces.append(airspace_data) _airspaces_mtime[file] = os.path.getmtime(os.path.join(OSDIR, "downloads", "aip", file)) - else: - QtWidgets.QMessageBox.information(None, "No Airspaces data in file:", f"{file}") + else: + QtWidgets.QMessageBox.information(None, "No Airspaces data in file:", f"{file}") return _airspaces
diff --git a/conftest.py b/conftest.py index be546d782..83f33ca85 100644 --- a/conftest.py +++ b/conftest.py @@ -211,9 +211,8 @@ def _load_module(module_name, path): @pytest.fixture(autouse=True) -def close_open_windows(): - """ - Closes all windows after every test +def fail_if_open_message_boxes_left(): + """Fail a test if there are any Qt message boxes left open at the end """ # Mock every MessageBox widget in the test suite to avoid unwanted freezes on unhandled error popups etc. with mock.patch("PyQt5.QtWidgets.QMessageBox.question") as q, \ @@ -224,7 +223,7 @@ def close_open_windows(): if any(box.call_count > 0 for box in [q, i, c, w]): summary = "\n".join([f"PyQt5.QtWidgets.QMessageBox.{box()._extract_mock_name()}: {box.mock_calls[:-1]}" for box in [q, i, c, w] if box.call_count > 0]) - warnings.warn(f"An unhandled message box popped up during your test!\n{summary}") + pytest.fail(f"An unhandled message box popped up during your test!\n{summary}") # Try to close all remaining widgets after each test diff --git a/tests/_test_msui/test_mscolab.py b/tests/_test_msui/test_mscolab.py index 7308986c2..14a8c254c 100644 --- a/tests/_test_msui/test_mscolab.py +++ b/tests/_test_msui/test_mscolab.py @@ -38,7 +38,7 @@ from mslib.mscolab.models import Permission, User from mslib.msui.flighttrack import WaypointsTableModel from PyQt5 import QtCore, QtTest, QtWidgets -from mslib.utils.config import read_config_file, config_loader +from mslib.utils.config import read_config_file, config_loader, modify_config_file from tests.utils import mscolab_start_server, create_msui_settings_file, ExceptionMock from mslib.msui import msui from mslib.msui import mscolab @@ -64,6 +64,7 @@ def setup_method(self): QtTest.QTest.qWait(500) self.application = QtWidgets.QApplication(sys.argv) self.main_window = msui.MSUIMainWindow(mscolab_data_dir=mscolab_settings.MSCOLAB_DATA_DIR) + self.main_window.create_new_flight_track() self.main_window.show() self.window = mscolab.MSColab_ConnectDialog(parent=self.main_window, mscolab=self.main_window.mscolab) self.window.urlCb.setEditText(self.url) @@ -122,6 +123,7 @@ def test_disconnect(self): def test_login(self): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(self.userdata[0], self.userdata[2]) QtWidgets.QApplication.processEvents() # show logged in widgets @@ -132,9 +134,31 @@ def test_login(self): # test operation listing visibility assert self.main_window.listOperationsMSC.model().rowCount() == 1 + @mock.patch("PyQt5.QtWidgets.QMessageBox.question", return_value=QtWidgets.QMessageBox.Yes) + def test_login_with_different_account_shows_update_credentials_popup(self, mockbox): + self._connect_to_mscolab() + connect_window = self.main_window.mscolab.connect_window + self._login(self.userdata[0], self.userdata[2]) + QtWidgets.QApplication.processEvents() + mockbox.assert_called_once_with( + connect_window, + "Update Credentials", + "You are using new credentials. 
Should your settings file be updated with the new credentials?", + mock.ANY, + mock.ANY, + ) + # show logged in widgets + assert self.main_window.usernameLabel.text() == self.userdata[1] + assert self.main_window.connectBtn.isVisible() is False + assert self.main_window.mscolab.connect_window is None + assert self.main_window.local_active is True + # test operation listing visibility + assert self.main_window.listOperationsMSC.model().rowCount() == 1 + def test_logout_action_trigger(self): # Login self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(self.userdata[0], self.userdata[2]) QtWidgets.QApplication.processEvents() assert self.main_window.usernameLabel.text() == self.userdata[1] @@ -149,6 +173,7 @@ def test_logout_action_trigger(self): def test_logout(self): # Login self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(self.userdata[0], self.userdata[2]) QtWidgets.QApplication.processEvents() assert self.main_window.usernameLabel.text() == self.userdata[1] @@ -163,6 +188,7 @@ def test_logout(self): @mock.patch("PyQt5.QtWidgets.QMessageBox.question", return_value=QtWidgets.QMessageBox.Yes) def test_add_user(self, mockmessage): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") assert config_loader(dataset="MSS_auth").get(self.url) == "something@something.org" assert mslib.utils.auth.get_password_from_keyring("MSCOLAB", @@ -197,6 +223,7 @@ def test_add_users_with_updating_credentials_in_config_file(self, mockmessage): assert config_loader(dataset="MSS_auth").get(self.url) == "something@something.org" self._connect_to_mscolab() assert self.window.mscolab_server_url is not None + modify_config_file({"MSS_auth": {self.url: "anand@something.org"}}) self._create_user("anand", "anand@something.org", "anand_pass") # check changed settings assert config_loader(dataset="MSS_auth").get(self.url) == "anand@something.org" @@ -277,6 +304,7 @@ def setup_method(self): QtTest.QTest.qWait(500) self.application = QtWidgets.QApplication(sys.argv) self.window = msui.MSUIMainWindow(mscolab_data_dir=mscolab_settings.MSCOLAB_DATA_DIR) + self.window.create_new_flight_track() self.window.show() def teardown_method(self): @@ -296,6 +324,7 @@ def teardown_method(self): def test_activate_operation(self): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(emailid=self.userdata[0], password=self.userdata[2]) # activate a operation self._activate_operation_at_index(0) @@ -305,6 +334,7 @@ def test_activate_operation(self): @mock.patch("PyQt5.QtWidgets.QMessageBox") def test_view_open(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(emailid=self.userdata[0], password=self.userdata[2]) # test after activating operation self._activate_operation_at_index(0) @@ -338,6 +368,7 @@ def test_view_open(self, mockbox): "Flight track (*.ftml)")) def test_handle_export(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(emailid=self.userdata[0], password=self.userdata[2]) self._activate_operation_at_index(0) self.window.actionExportFlightTrackFTML.trigger() @@ -362,6 +393,7 @@ def test_import_file(self, mockbox, ext): with mock.patch("PyQt5.QtWidgets.QFileDialog.getSaveFileName", return_value=(file_path, None)): with 
mock.patch("PyQt5.QtWidgets.QFileDialog.getOpenFileName", return_value=(file_path, None)): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(emailid=self.userdata[0], password=self.userdata[2]) self._activate_operation_at_index(0) exported_wp = WaypointsTableModel(waypoints=self.window.mscolab.waypoints_model.waypoints) @@ -393,6 +425,7 @@ def test_import_file(self, mockbox, ext): @pytest.mark.skip("Runs in a timeout locally > 60s") def test_work_locally_toggle(self): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(emailid=self.userdata[0], password=self.userdata[2]) self._activate_operation_at_index(0) self.window.workLocallyCheckbox.setChecked(True) @@ -413,6 +446,7 @@ def test_work_locally_toggle(self): @mock.patch("mslib.msui.mscolab.get_open_filename", return_value=os.path.join(sample_path, u"example.ftml")) def test_browse_add_operation(self, mockopen, mockmessage): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") assert self.window.listOperationsMSC.model().rowCount() == 0 self.window.actionAddOperation.trigger() @@ -436,59 +470,75 @@ def test_browse_add_operation(self, mockopen, mockmessage): assert item.operation_path == "example" assert item.access_level == "creator" - @mock.patch("PyQt5.QtWidgets.QErrorMessage") - def test_add_operation(self, mockbox): + def test_add_operation(self): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") assert self.window.usernameLabel.text() == 'something' assert self.window.connectBtn.isVisible() is False - self._create_operation("Alpha", "Description Alpha") - assert mockbox.return_value.showMessage.call_count == 1 - with mock.patch("PyQt5.QtWidgets.QLineEdit.text", return_value=None): + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self._create_operation("Alpha", "Description Alpha") + m.assert_called_once_with( + self.window, + "Creation successful", + "Your operation was created successfully.", + ) + with (mock.patch("PyQt5.QtWidgets.QLineEdit.text", return_value=None), + mock.patch("PyQt5.QtWidgets.QErrorMessage.showMessage") as m): self._create_operation("Alpha2", "Description Alpha") - with mock.patch("PyQt5.QtWidgets.QTextEdit.toPlainText", return_value=None): + m.assert_called_once_with("Path can't be empty") + with (mock.patch("PyQt5.QtWidgets.QTextEdit.toPlainText", return_value=None), + mock.patch("PyQt5.QtWidgets.QErrorMessage.showMessage") as m): self._create_operation("Alpha3", "Description Alpha") - self._create_operation("/", "Description Alpha") - assert mockbox.return_value.showMessage.call_count == 4 + m.assert_called_once_with("Description can't be empty") + with mock.patch("PyQt5.QtWidgets.QErrorMessage.showMessage") as m: + self._create_operation("/", "Description Alpha") + m.assert_called_once_with("Path can't contain spaces or special characters") assert self.window.listOperationsMSC.model().rowCount() == 1 - self._create_operation("reproduce-test", "Description Test") + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self._create_operation("reproduce-test", "Description Test") + m.assert_called_once() assert self.window.listOperationsMSC.model().rowCount() 
== 2 self._activate_operation_at_index(0) assert self.window.mscolab.active_operation_name == "Alpha" self._activate_operation_at_index(1) assert self.window.mscolab.active_operation_name == "reproduce-test" - @mock.patch("PyQt5.QtWidgets.QMessageBox.information") @mock.patch("PyQt5.QtWidgets.QInputDialog.getText", return_value=("flight7", True)) - def test_handle_delete_operation(self, mocktext, mockbox): + def test_handle_delete_operation(self, mocktext): # pytest.skip('needs a review for the delete button pressed. Seems to delete a None operation') self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "berta@something.org"}}) self._create_user("berta", "berta@something.org", "something") assert self.window.usernameLabel.text() == 'berta' assert self.window.connectBtn.isVisible() is False assert self.window.listOperationsMSC.model().rowCount() == 0 operation_name = "flight7" - self._create_operation(operation_name, "Description flight7") + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self._create_operation(operation_name, "Description flight7") + m.assert_called_once() # check for operation dir is created on server assert os.path.isdir(os.path.join(mscolab_settings.MSCOLAB_DATA_DIR, operation_name)) self._activate_operation_at_index(0) op_id = self.window.mscolab.get_recent_op_id() assert op_id is not None assert self.window.listOperationsMSC.model().rowCount() == 1 - self.window.actionDeleteOperation.trigger() - QtWidgets.QApplication.processEvents() + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self.window.actionDeleteOperation.trigger() + QtWidgets.QApplication.processEvents() + m.assert_called_once_with(self.window, "Success", 'Operation "flight7" was deleted!') op_id = self.window.mscolab.get_recent_op_id() assert op_id is None QtWidgets.QApplication.processEvents() QtTest.QTest.qWait(0) # check operation dir name removed assert os.path.isdir(os.path.join(mscolab_settings.MSCOLAB_DATA_DIR, operation_name)) is False - assert mockbox.call_count == 1 @mock.patch("PyQt5.QtWidgets.QMessageBox.question", return_value=QtWidgets.QMessageBox.Yes) def test_handle_leave_operation(self, mockmessage): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata3[0]}}) self._login(self.userdata3[0], self.userdata3[2]) QtWidgets.QApplication.processEvents() assert self.window.usernameLabel.text() == self.userdata3[1] @@ -514,55 +564,68 @@ def test_handle_leave_operation(self, mockmessage): assert self.window.listViews.count() == 0 assert self.window.listOperationsMSC.model().rowCount() == 0 - @mock.patch("PyQt5.QtWidgets.QMessageBox.information") @mock.patch("PyQt5.QtWidgets.QInputDialog.getText", return_value=("new_name", True)) - def test_handle_rename_operation(self, mockbox, mockpatch): + def test_handle_rename_operation(self, mocktext): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") - self._create_operation("flight1234", "Description flight1234") + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self._create_operation("flight1234", "Description flight1234") + m.assert_called_once() assert self.window.listOperationsMSC.model().rowCount() == 1 self._activate_operation_at_index(0) assert self.window.mscolab.active_op_id is not None - 
self.window.actionRenameOperation.trigger() - QtWidgets.QApplication.processEvents() + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self.window.actionRenameOperation.trigger() + QtWidgets.QApplication.processEvents() + m.assert_called_once_with(self.window, "Rename successful", "Operation is renamed successfully.") QtTest.QTest.qWait(0) assert self.window.mscolab.active_op_id is not None assert self.window.mscolab.active_operation_name == "new_name" - @mock.patch("PyQt5.QtWidgets.QMessageBox.information") - @mock.patch("PyQt5.QtWidgets.QInputDialog.getText", return_value=("new_desciption", True)) - def test_update_description(self, mockbox, mockpatch): + @mock.patch("PyQt5.QtWidgets.QInputDialog.getText", return_value=("new_description", True)) + def test_update_description(self, mocktext): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") - self._create_operation("flight1234", "Description flight1234") + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self._create_operation("flight1234", "Description flight1234") + m.assert_called_once() assert self.window.listOperationsMSC.model().rowCount() == 1 self._activate_operation_at_index(0) assert self.window.mscolab.active_op_id is not None - self.window.actionChangeDescription.trigger() - QtWidgets.QApplication.processEvents() + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self.window.actionChangeDescription.trigger() + QtWidgets.QApplication.processEvents() + m.assert_called_once_with(self.window, "Update successful", "Description is updated successfully.") QtTest.QTest.qWait(0) assert self.window.mscolab.active_op_id is not None - assert self.window.mscolab.active_operation_description == "new_desciption" + assert self.window.mscolab.active_operation_description == "new_description" - @mock.patch("PyQt5.QtWidgets.QMessageBox.information") @mock.patch("PyQt5.QtWidgets.QInputDialog.getText", return_value=("new_category", True)) - def test_update_category(self, mockbox, mockpatch): + def test_update_category(self, mocktext): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") - self._create_operation("flight1234", "Description flight1234") + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self._create_operation("flight1234", "Description flight1234") + m.assert_called_once() assert self.window.listOperationsMSC.model().rowCount() == 1 assert self.window.mscolab.active_operation_category == "example" self._activate_operation_at_index(0) assert self.window.mscolab.active_op_id is not None - self.window.actionChangeCategory.trigger() - QtWidgets.QApplication.processEvents() + with mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) as m: + self.window.actionChangeCategory.trigger() + QtWidgets.QApplication.processEvents() + m.assert_called_once_with(self.window, "Update successful", "Category is updated successfully.") QtTest.QTest.qWait(0) assert self.window.mscolab.active_op_id is not None assert self.window.mscolab.active_operation_category == "new_category" @mock.patch("PyQt5.QtWidgets.QMessageBox.information") - def 
test_any_special_category(self, mockpatch): + def test_any_special_category(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") self._create_operation("flight1234", "Description flight1234") QtTest.QTest.qWait(0) @@ -580,8 +643,10 @@ def test_any_special_category(self, mockpatch): range(self.window.mscolab.ui.listOperationsMSC.count())] assert ["flight5678"] == operation_pathes - def test_get_recent_op_id(self): + @mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) + def test_get_recent_op_id(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "anton@something.org"}}) self._create_user("anton", "anton@something.org", "something") QtTest.QTest.qWait(100) assert self.window.usernameLabel.text() == 'anton' @@ -594,8 +659,10 @@ def test_get_recent_op_id(self): # ToDo fix number after cleanup initial data assert self.window.mscolab.get_recent_op_id() == current_op_id + 2 - def test_get_recent_operation(self): + @mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) + def test_get_recent_operation(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "berta@something.org"}}) self._create_user("berta", "berta@something.org", "something") QtTest.QTest.qWait(100) assert self.window.usernameLabel.text() == 'berta' @@ -607,8 +674,10 @@ def test_get_recent_operation(self): assert operation["path"] == "flight1234" assert operation["access_level"] == "creator" - def test_open_chat_window(self): + @mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) + def test_open_chat_window(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") self._create_operation("flight1234", "Description flight1234") assert self.window.listOperationsMSC.model().rowCount() == 1 @@ -619,8 +688,10 @@ def test_open_chat_window(self): QtTest.QTest.qWait(0) assert self.window.mscolab.chat_window is not None - def test_close_chat_window(self): + @mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) + def test_close_chat_window(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") self._create_operation("flight1234", "Description flight1234") assert self.window.listOperationsMSC.model().rowCount() == 1 @@ -631,8 +702,10 @@ def test_close_chat_window(self): self.window.mscolab.close_chat_window() assert self.window.mscolab.chat_window is None - def test_delete_operation_from_list(self): + @mock.patch("PyQt5.QtWidgets.QMessageBox.information", return_value=QtWidgets.QMessageBox.Ok) + def test_delete_operation_from_list(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "other@something.org"}}) self._create_user("other", "other@something.org", "something") assert self.window.usernameLabel.text() == 'other' assert self.window.connectBtn.isVisible() is False @@ -646,6 +719,7 @@ def test_delete_operation_from_list(self): @mock.patch("PyQt5.QtWidgets.QMessageBox.question", return_value=QtWidgets.QMessageBox.Yes) def test_user_delete(self, mockmessage): self._connect_to_mscolab() + 
modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") u_id = self.window.mscolab.user['id'] self.window.mscolab.open_profile_window() @@ -692,6 +766,7 @@ def test_create_dir_exceptions(self, mockexit, mockbox): @mock.patch("PyQt5.QtWidgets.QMessageBox") def test_profile_dialog(self, mockbox): self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: "something@something.org"}}) self._create_user("something", "something@something.org", "something") self.window.mscolab.profile_action.trigger() QtWidgets.QApplication.processEvents() @@ -739,8 +814,7 @@ def _reset_config_file(self): config_file = fs.path.combine(MSUI_CONFIG_PATH, "msui_settings.json") read_config_file(path=config_file) - @mock.patch("mslib.msui.mscolab.QtWidgets.QErrorMessage.showMessage") - def _create_operation(self, path, description, mockbox, category="example"): + def _create_operation(self, path, description, category="example"): self.window.actionAddOperation.trigger() QtWidgets.QApplication.processEvents() self.window.mscolab.add_proj_dialog.path.setText(str(path)) diff --git a/tests/_test_msui/test_mscolab_admin_window.py b/tests/_test_msui/test_mscolab_admin_window.py index 4427d3083..348c17814 100644 --- a/tests/_test_msui/test_mscolab_admin_window.py +++ b/tests/_test_msui/test_mscolab_admin_window.py @@ -25,6 +25,7 @@ limitations under the License. """ import os +import mock import pytest import sys @@ -35,6 +36,7 @@ from mslib.msui import msui from mslib.mscolab.mscolab import handle_db_reset from mslib.mscolab.seed import add_user, get_user, add_operation, add_user_to_operation +from mslib.utils.config import modify_config_file PORTS = list(range(24000, 24500)) @@ -68,9 +70,11 @@ def setup_method(self): QtTest.QTest.qWait(500) self.application = QtWidgets.QApplication(sys.argv) self.window = msui.MSUIMainWindow(mscolab_data_dir=mscolab_settings.MSCOLAB_DATA_DIR) + self.window.create_new_flight_track() self.window.show() # connect and login to mscolab self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(emailid=self.userdata[0], password=self.userdata[2]) # activate operation and open chat window self._activate_operation_at_index(0) @@ -86,6 +90,9 @@ def teardown_method(self): self.window.mscolab.admin_window.close() if self.window.mscolab.conn: self.window.mscolab.conn.disconnect() + with mock.patch("PyQt5.QtWidgets.QMessageBox.warning", return_value=QtWidgets.QMessageBox.Yes): + self.window.close() + QtWidgets.QApplication.processEvents() self.application.quit() QtWidgets.QApplication.processEvents() self.process.terminate() diff --git a/tests/_test_msui/test_mscolab_merge_waypoints.py b/tests/_test_msui/test_mscolab_merge_waypoints.py index 6ac974510..e8afee9fc 100644 --- a/tests/_test_msui/test_mscolab_merge_waypoints.py +++ b/tests/_test_msui/test_mscolab_merge_waypoints.py @@ -53,6 +53,7 @@ def setup_method(self): QtTest.QTest.qWait(500) self.application = QtWidgets.QApplication(sys.argv) self.window = msui.MSUIMainWindow(mscolab_data_dir=mscolab_settings.MSCOLAB_DATA_DIR) + self.window.create_new_flight_track() self.emailid = 'merge@alpha.org' def teardown_method(self): diff --git a/tests/_test_msui/test_mscolab_operation.py b/tests/_test_msui/test_mscolab_operation.py index 5ca3a48dd..ec2b769a9 100644 --- a/tests/_test_msui/test_mscolab_operation.py +++ b/tests/_test_msui/test_mscolab_operation.py @@ -36,6 +36,7 @@ from mslib.msui import msui from 
mslib.mscolab.mscolab import handle_db_reset from mslib.mscolab.seed import add_user, get_user, add_operation, add_user_to_operation +from mslib.utils.config import modify_config_file PORTS = list(range(22000, 22500)) @@ -63,9 +64,11 @@ def setup_method(self): QtTest.QTest.qWait(500) self.application = QtWidgets.QApplication(sys.argv) self.window = msui.MSUIMainWindow(mscolab_data_dir=mscolab_settings.MSCOLAB_DATA_DIR) + self.window.create_new_flight_track() self.window.show() # connect and login to mscolab self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(self.userdata[0], self.userdata[2]) # activate operation and open chat window self._activate_operation_at_index(0) diff --git a/tests/_test_msui/test_mscolab_version_history.py b/tests/_test_msui/test_mscolab_version_history.py index 01e48739f..a84c61aa7 100644 --- a/tests/_test_msui/test_mscolab_version_history.py +++ b/tests/_test_msui/test_mscolab_version_history.py @@ -36,6 +36,7 @@ from mslib.msui import msui from mslib.mscolab.mscolab import handle_db_reset from mslib.mscolab.seed import add_user, get_user, add_operation, add_user_to_operation +from mslib.utils.config import modify_config_file PORTS = list(range(20000, 20500)) @@ -56,9 +57,11 @@ def setup_method(self): QtTest.QTest.qWait(500) self.application = QtWidgets.QApplication(sys.argv) self.window = msui.MSUIMainWindow(mscolab_data_dir=mscolab_settings.MSCOLAB_DATA_DIR) + self.window.create_new_flight_track() self.window.show() # connect and login to mscolab self._connect_to_mscolab() + modify_config_file({"MSS_auth": {self.url: self.userdata[0]}}) self._login(self.userdata[0], self.userdata[2]) # activate operation and open chat window self._activate_operation_at_index(0) diff --git a/tests/_test_msui/test_msui.py b/tests/_test_msui/test_msui.py index 1aee9207d..cddb9c666 100644 --- a/tests/_test_msui/test_msui.py +++ b/tests/_test_msui/test_msui.py @@ -263,7 +263,7 @@ def test_plugin_saveas(self, save_file): def test_plugin_import(self, open_file): with mock.patch("mslib.msui.msui_mainwindow.config_loader", return_value=self.import_plugins): self.window.add_import_plugins("qt") - with mock.patch("mslib.msui.msui_mainwindow.get_open_filenames", return_value=open_file) as mockopen: + with mock.patch("mslib.msui.msui_mainwindow.get_open_filenames", return_value=[open_file[0]]) as mockopen: assert self.window.listFlightTracks.count() == 1 assert mockopen.call_count == 0 self.window.last_save_directory = ROOT_DIR diff --git a/tests/_test_msui/test_satellite_dockwidget.py b/tests/_test_msui/test_satellite_dockwidget.py index e558fcd1c..8ebbf7c84 100644 --- a/tests/_test_msui/test_satellite_dockwidget.py +++ b/tests/_test_msui/test_satellite_dockwidget.py @@ -61,7 +61,13 @@ def test_load(self): assert self.view.plot_satellite_overpass.call_count == 2 self.view.reset_mock() - def test_load_no_file(self): + @mock.patch("PyQt5.QtWidgets.QMessageBox.critical") + def test_load_no_file(self, mockbox): QtTest.QTest.mouseClick(self.window.btLoadFile, QtCore.Qt.LeftButton) QtWidgets.QApplication.processEvents() assert self.window.cbSatelliteOverpasses.count() == 0 + mockbox.assert_called_once_with( + self.window, + "Satellite Overpass Tool", + "ERROR:\n<class 'fs.errors.FileExpected'>\npath '' should be a file", + ) diff --git a/tests/_test_utils/test_airdata.py b/tests/_test_utils/test_airdata.py index 10299f380..1545b15ab 100644 --- a/tests/_test_utils/test_airdata.py +++ b/tests/_test_utils/test_airdata.py @@ -204,10 +204,12 @@ def 
test_get_airspaces(mockbox): @mock.patch("mslib.utils.airdata.download_progress", _download_incomplete_airspace) +@mock.patch("PyQt5.QtWidgets.QMessageBox.information") @mock.patch("PyQt5.QtWidgets.QMessageBox.question", return_value=QtWidgets.QMessageBox.Yes) -def test_get_airspaces_missing_data(mockbox): +def test_get_airspaces_missing_data(mockbox, infobox): """ We use a test file without the need for downloading to check handling """ # update_airspace would only update after 30 days _cleanup_test_files() airspaces = get_airspaces(countries=["bg"]) assert airspaces == [] + infobox.assert_called_once_with(None, 'No Airspaces data in file:', 'bg_asp.xml') diff --git a/tests/utils.py b/tests/utils.py index 895dca650..cbd107287 100644 --- a/tests/utils.py +++ b/tests/utils.py @@ -41,6 +41,7 @@ from mslib.mscolab.conf import mscolab_settings from mslib.mscolab.server import APP, initialize_managers, start_server from mslib.mscolab.mscolab import handle_db_init +from mslib.utils.config import modify_config_file def callback_ok_image(status, response_headers): @@ -198,6 +199,9 @@ def mscolab_start_server(all_ports, mscolab_settings=mscolab_settings, timeout=1 url = f"http://localhost:{port}" + # Update mscolab URL to avoid "Update Server List" message boxes + modify_config_file({"default_MSCOLAB": [url]}) + _app = APP _app.config['SQLALCHEMY_DATABASE_URI'] = mscolab_settings.SQLALCHEMY_DB_URI _app.config['MSCOLAB_DATA_DIR'] = mscolab_settings.MSCOLAB_DATA_DIR
What to do with "UserWarning: An unhandled message box popped up during your test!"? There are many of these warnings in the CI logs basically spamming the output and drowning out other more interesting warnings. These warnings are originating from https://github.com/Open-MSS/MSS/blob/1327ede1dbe3f4eb26bf3889934fa76c74fb428b/conftest.py#L227. I am not sure why this warning was introduced in the first place, but in my opinion there are two options to handle this better: 1. If it is an issue that there are unhandled message boxes, then this should fail the respective test. 2. If it is not an issue, then we shouldn't flood the output with this warning. Since I am not too familiar with the internals of the test suite I'd like to discuss how to make this better.
see here, the warning comes from the fixture https://github.com/Open-MSS/MSS/blob/develop/conftest.py#L214 tests better should fail instead of hiding one cause, some of the tests showing that have to do a second turn. Sometimes functionality gets added but not the test improved e.g. ``` call(<mslib.msui.mscolab.MSColab_ConnectDialog object at 0x7fc4c26e3e20>, 'Update Credentials', 'You are using new credentials. Should your settings file be updated with the new credentials?', <PyQt5.QtWidgets.QMessageBox.StandardButtons object at 0x7fc4bfefa880>, 65536), call().__eq__(16384)] 688 warnings.warn(f"An unhandled message box popped up during your test!\n{summary}") ``` If that had failed then we would know better.
1,693,392,061,000
[]
Performance Issue
[ "mslib/msui/mscolab.py:MSUIMscolab.add_operation", "mslib/msui/mscolab.py:MSUIMscolab.change_category_handler", "mslib/msui/mscolab.py:MSUIMscolab.change_description_handler", "mslib/msui/mscolab.py:MSUIMscolab.rename_operation_handler", "mslib/msui/mscolab.py:MSUIMscolab.logout", "mslib/msui/socket_control.py:ConnectionManager.disconnect", "mslib/utils/airdata.py:get_airspaces" ]
[]

LOC-BENCH: A Benchmark for Code Localization

LOC-BENCH is a dataset specifically designed for evaluating code localization methods in software repositories. It provides a diverse set of issues, including bug reports, feature requests, security vulnerabilities, and performance optimizations.

Note: This is the dataset version that was used in the paper.

This dataset is the first version; we later released a refined version, czlll/Loc-Bench, which improves data quality by filtering out examples whose patches do not modify any functions. We recommend using the refined dataset to evaluate code localization performance.

πŸ”§ How to Use

You can easily load LOC-BENCH using Hugging Face's datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("czlll/Loc-Bench_V0", split='test')
```
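
Each record pairs a repository issue with the code locations its gold patch touched. Below is a minimal sketch of inspecting one record; the column names are assumptions taken from the viewer's schema (`repo`, `instance_id`, `problem_statement`, `category`, `edit_functions`), so adjust them if the schema differs:

```python
# Peek at the first record; field names assumed from the dataset viewer.
example = dataset[0]
print(example["repo"], example["instance_id"], example["category"])
print(example["problem_statement"][:300])  # the issue text used as the localization query
print(example["edit_functions"])           # functions modified by the gold patch
```

To evaluate against the refined release instead, load `czlll/Loc-Bench` the same way.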

πŸ“„ Citation

If you use LOC-BENCH in your research, please cite our paper:

```bibtex
@article{chen2025locagent,
  title={LocAgent: Graph-Guided LLM Agents for Code Localization},
  author={Chen, Zhaoling and Tang, Xiangru and Deng, Gangda and Wu, Fang and Wu, Jialong and Jiang, Zhiwei and Prasanna, Viktor and Cohan, Arman and Wang, Xingyao},
  journal={arXiv preprint arXiv:2503.09089},
  year={2025}
}
```