repo | instance_id | base_commit | patch | test_patch | problem_statement | hints_text | created_at | labels | category | edit_functions | added_functions
---|---|---|---|---|---|---|---|---|---|---|---
flet-dev/flet | flet-dev__flet-4384 | 5fb877b3a3f886f3475cd8ebca1cee52472d0ef7 | diff --git a/sdk/python/packages/flet/src/flet/core/icon.py b/sdk/python/packages/flet/src/flet/core/icon.py
index 8944f28bf..5af67aa7b 100644
--- a/sdk/python/packages/flet/src/flet/core/icon.py
+++ b/sdk/python/packages/flet/src/flet/core/icon.py
@@ -130,6 +130,7 @@ def _get_control_name(self):
return "icon"
def before_update(self):
+ super().before_update()
self._set_attr_json("shadows", self.__shadows)
# name
| Icon rotation doesn't work with flet-0.25.0.dev3711
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
Icon rotation doesn't work anymore with flet 0.25
IconButton and other controls are OK
### Code sample
<details open><summary>Code</summary>
flet 0.24.1 :
```python
import flet as ft
import math
def main(page: ft.Page):
page.add(
ft.Icon(
name=ft.icons.FAVORITE,
),
ft.Icon(
name=ft.icons.FAVORITE,
size=60,
rotate=ft.Rotate(0.5*math.pi)
),
)
ft.app(target=main)
```
flet-0.25.0.dev3711 :
```python
import flet as ft
import math
def main(page: ft.Page):
page.add(
ft.Icon(
name=ft.Icons.FAVORITE,
),
ft.Icon(
name=ft.Icons.FAVORITE,
size=60,
rotate=ft.Rotate(0.5*math.pi)
),
)
ft.app(target=main)
```
</details>
### To reproduce
Nothing particular :
With the same code, icon is rotated with flet 0.24.1 and lower, not rotated with 0.25
### Expected behavior
The icon should rotate, as it does in 0.24.1 and earlier.
### Screenshots / Videos
<details open>
<summary>Captures</summary>
0.24.1 :

flet-0.25.0.dev3711

</details>
### Operating System
Windows
### Operating system details
windows 11 24H2
### Flet version
flet-0.25.0.dev3711
### Regression
Yes, it used to work in a previous Flet version (please specify the version in additional details)
### Suggestions
_No response_
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Additional details
_No response_
| There are a lot of issues we are facing after 0.24.1 | 1,731,816,481,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/icon.py:Icon.before_update"
] | [] |
|
flet-dev/flet | flet-dev__flet-4373 | 5fb877b3a3f886f3475cd8ebca1cee52472d0ef7 | diff --git a/sdk/python/packages/flet/src/flet/core/markdown.py b/sdk/python/packages/flet/src/flet/core/markdown.py
index 118e4e1c4..77a761c88 100644
--- a/sdk/python/packages/flet/src/flet/core/markdown.py
+++ b/sdk/python/packages/flet/src/flet/core/markdown.py
@@ -400,7 +400,12 @@ def before_update(self):
self._set_attr_json("codeStyle", self.__code_style)
self._set_attr_json("codeStyleSheet", self.__code_style_sheet)
self._set_attr_json("mdStyleSheet", self.__md_style_sheet)
- self._set_attr_json("codeTheme", self.__code_theme)
+ self._set_attr_json(
+ "codeTheme",
+ self.__code_theme.value
+ if isinstance(self.__code_theme, MarkdownCodeTheme)
+ else self.__code_theme,
+ )
def _get_children(self):
if self.__img_error_content is not None:
@@ -483,11 +488,13 @@ def md_style_sheet(self, value: Optional[MarkdownStyleSheet]):
# code_theme
@property
- def code_theme(self) -> Optional[MarkdownCodeTheme]:
+ def code_theme(self) -> Optional[Union[MarkdownCodeTheme, MarkdownCustomCodeTheme]]:
return self.__code_theme
@code_theme.setter
- def code_theme(self, value: Optional[MarkdownCodeTheme]):
+ def code_theme(
+ self, value: Optional[Union[MarkdownCodeTheme, MarkdownCustomCodeTheme]]
+ ):
self.__code_theme = value
# code_style
| Regression in `Markdown.code_theme` when using `MarkdownCodeTheme` enum
A custom theme works great. The only issue I faced was setting `code_theme` with `ft.MarkdownCodeTheme.ATOM_ONE_DARK` (or any other value), but **only** when using the `ft.MarkdownCodeTheme` enum class; the error it throws is:
# Code
```python
import flet as ft
data = """
```python
class MyClass(object):
def __init__(self):
pass
def greet(self):
print("Hello World")
```
"""
def main(page: ft.Page):
page.add(
ft.Markdown(
value=data,
code_theme=ft.MarkdownCodeTheme.ATOM_ONE_DARK,
extension_set=ft.MarkdownExtensionSet.GITHUB_WEB,
)
)
ft.app(main)
```
<details>
<summary>Error</summary>
```bash
(.venv) PS D:\CodingFolder\myCodingFilesPy\Flet\leraning> flet run .\theme\ --web
http://127.0.0.1:54949
D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\websockets\legacy\server.py:1185: DeprecationWarning: remove second argument of ws_handler
warnings.warn("remove second argument of ws_handler", DeprecationWarning)
Unhandled error processing page session 49apGh5JlJ9Wkk0V: Traceback (most recent call last):
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet_web\fastapi\flet_app.py", line 141, in __on_session_created
await asyncio.get_running_loop().run_in_executor(
File "C:\ProgramData\anaconda3\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\theme\main.py", line 12, in main
page.add(
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\page.py", line 737, in add
r = self.__update(self)
^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\page.py", line 835, in __update
commands, added_controls, removed_controls = self.__prepare_update(*controls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\page.py", line 851, in __prepare_update
control.build_update_commands(
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 497, in build_update_commands
innerCmds = ctrl._build_add_commands(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 562, in _build_add_commands
childCmd = control._build_add_commands(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 551, in _build_add_commands
command = self._build_command(False)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 578, in _build_command
self.before_update()
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\markdown.py", line 403, in before_update
self._set_attr_json("codeTheme", self.__code_theme)
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 189, in _set_attr_json
nv = self._convert_attr_json(
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\control.py", line 197, in _convert_attr_json
json.dumps(value, cls=EmbedJsonEncoder, separators=(",", ":"))
File "C:\ProgramData\anaconda3\Lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\embed_json_encoder.py", line 59, in encode
return super().encode(self._convert_enums(o))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "D:\CodingFolder\myCodingFilesPy\Flet\leraning\.venv\Lib\site-packages\flet\core\embed_json_encoder.py", line 49, in default
obj_as_dict = self._convert_enums(obj.__dict__)
^^^^^^^^^^^^
AttributeError: 'mappingproxy' object has no attribute '__dict__'
```
</details>

### I think there's something wrong with the `ft.MarkdownCodeTheme` class, because when I used `"atom-one-dark"` it worked. When I ran the same code with `flet` version `0.24.1`, it worked as expected. Kindly raise an issue regarding this.
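A minimal workaround sketch based on the observation above, assuming the enum's `ATOM_ONE_DARK` value is the plain string `"atom-one-dark"` (parameters mirror the code sample earlier in this report):
```python
# Hypothetical workaround until the enum is serialized correctly:
# pass the code theme as its plain string value instead of the enum member.
ft.Markdown(
    value=data,
    code_theme="atom-one-dark",
    extension_set=ft.MarkdownExtensionSet.GITHUB_WEB,
)
```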
_Originally posted by @tanmay-bhatgare in https://github.com/flet-dev/flet/issues/4342#issuecomment-2476781933_
| 1,731,605,637,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/markdown.py:Markdown.before_update",
"sdk/python/packages/flet/src/flet/core/markdown.py:Markdown.code_theme"
] | [] |
||
flet-dev/flet | flet-dev__flet-4340 | 0f7b14b787eb4249b93e2abd5da23cc953b1e091 | diff --git a/sdk/python/packages/flet/src/flet/core/colors.py b/sdk/python/packages/flet/src/flet/core/colors.py
index 0fb695878..66b4293e8 100644
--- a/sdk/python/packages/flet/src/flet/core/colors.py
+++ b/sdk/python/packages/flet/src/flet/core/colors.py
@@ -37,9 +37,12 @@
import random
from enum import Enum, EnumMeta
-from typing import Dict, List, Optional, Union
+from typing import TYPE_CHECKING, Dict, List, Optional, Union
from warnings import warn
+if TYPE_CHECKING:
+ from flet.core.types import ColorValue
+
from flet.utils import deprecated
@@ -56,9 +59,16 @@ def __getattribute__(self, item):
class colors(str, Enum, metaclass=ColorsDeprecated):
- def with_opacity(self, opacity: Union[int, float]) -> str:
+ @staticmethod
+ @deprecated(
+ reason="Use Colors.with_opacity() method instead.",
+ version="0.25.0",
+ delete_version="0.28.0",
+ )
+ def with_opacity(opacity: Union[int, float], color: "ColorValue") -> str:
assert 0 <= opacity <= 1, "opacity must be between 0 and 1"
- return f"{self.value},{opacity}"
+ color_str = color.value if isinstance(color, Enum) else color
+ return f"{color_str},{opacity}"
@staticmethod
def random():
@@ -416,21 +426,11 @@ def random_color():
class Colors(str, Enum):
- def with_opacity(self, opacity: Union[int, float]) -> str:
- """
- Returns the color with the specified opacity.
-
- Args:
- opacity: The opacity value, which must be between 0 and 1.
-
- Returns:
- A string representing the color value with the specified opacity appended.
-
- Raises:
- AssertionError: If the opacity is not between 0 and 1 (inclusive).
- """
+ @staticmethod
+ def with_opacity(opacity: Union[int, float], color: "ColorValue") -> str:
assert 0 <= opacity <= 1, "opacity must be between 0 and 1"
- return f"{self.value},{opacity}"
+ color_str = color.value if isinstance(color, Enum) else color
+ return f"{color_str},{opacity}"
@staticmethod
def random(
| Using `ft.colors.with_opacity` returns exception, should be warning
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
This code used to work before:
```
tooltip_bgcolor=ft.Colors.with_opacity(0.5, ft.Colors.GREY_300)
```
Now it returns exception:
ERROR:asyncio:Future exception was never retrieved
future: <Future finished exception=TypeError("'<=' not supported between instances of 'int' and 'Colors'")>
Traceback (most recent call last):
File "/Users/inesa/.pyenv/versions/3.12.6/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/inesa/Library/Caches/pypoetry/virtualenvs/controls-gallery-WumZyC-d-py3.12/lib/python3.12/site-packages/flet/core/page.py", line 944, in wrapper
handler(*args)
File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/main.py", line 36, in route_change
gallery_view.display_control_examples(route_list[0], route_list[1])
File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/components/gallery_view.py", line 30, in display_control_examples
self.examples_view.display(
File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/components/examples_view.py", line 55, in display
content=example.example(),
^^^^^^^^^^^^^^^^^
File "/Users/inesa/projects/flet-dev/examples/python/apps/controls-gallery/examples/charts/barchart/01_barchart_1.py", line 87, in example
tooltip_bgcolor=ft.colors.with_opacity(0.5, ft.Colors.GREY_300),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/inesa/Library/Caches/pypoetry/virtualenvs/controls-gallery-WumZyC-d-py3.12/lib/python3.12/site-packages/flet/core/colors.py", line 60, in with_opacity
assert 0 <= opacity <= 1, "opacity must be between 0 and 1"
### Code sample
<details open><summary>Code</summary>
```python
import flet as ft
def main(page: ft.Page):
chart = ft.BarChart(
bar_groups=[
ft.BarChartGroup(
x=0,
bar_rods=[
ft.BarChartRod(
from_y=0,
to_y=40,
width=40,
color=ft.colors.AMBER,
tooltip="Apple",
border_radius=0,
),
],
),
ft.BarChartGroup(
x=1,
bar_rods=[
ft.BarChartRod(
from_y=0,
to_y=100,
width=40,
color=ft.colors.BLUE,
tooltip="Blueberry",
border_radius=0,
),
],
),
ft.BarChartGroup(
x=2,
bar_rods=[
ft.BarChartRod(
from_y=0,
to_y=30,
width=40,
color=ft.colors.RED,
tooltip="Cherry",
border_radius=0,
),
],
),
ft.BarChartGroup(
x=3,
bar_rods=[
ft.BarChartRod(
from_y=0,
to_y=60,
width=40,
color=ft.colors.ORANGE,
tooltip="Orange",
border_radius=0,
),
],
),
],
border=ft.border.all(1, ft.colors.GREY_400),
left_axis=ft.ChartAxis(
labels_size=40, title=ft.Text("Fruit supply"), title_size=40
),
bottom_axis=ft.ChartAxis(
labels=[
ft.ChartAxisLabel(
value=0, label=ft.Container(ft.Text("Apple"), padding=10)
),
ft.ChartAxisLabel(
value=1, label=ft.Container(ft.Text("Blueberry"), padding=10)
),
ft.ChartAxisLabel(
value=2, label=ft.Container(ft.Text("Cherry"), padding=10)
),
ft.ChartAxisLabel(
value=3, label=ft.Container(ft.Text("Orange"), padding=10)
),
],
labels_size=40,
),
horizontal_grid_lines=ft.ChartGridLines(
color=ft.colors.GREY_300, width=1, dash_pattern=[3, 3]
),
tooltip_bgcolor=ft.colors.with_opacity(0.5, ft.colors.GREY_300),
max_y=110,
interactive=True,
expand=True,
)
page.add(chart)
ft.app(main)
```
</details>
### To reproduce
Run the repro code -> BarChart is not displayed, error in log:
Unhandled error processing page session : Traceback (most recent call last):
File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/app.py", line 247, in on_session_created
await asyncio.get_running_loop().run_in_executor(
File "/Users/inesa/.pyenv/versions/3.12.6/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/inesa/projects/flet-dev/flet/sdk/python/playground/bar_chart_test.py", line 84, in main
tooltip_bgcolor=ft.colors.with_opacity(0.5, ft.colors.GREY_300),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/inesa/projects/flet-dev/flet/sdk/python/packages/flet/src/flet/core/colors.py", line 60, in with_opacity
assert 0 <= opacity <= 1, "opacity must be between 0 and 1"
^^^^^^^^^^^^^^^^^
TypeError: '<=' not supported between instances of 'int' and 'colors'
### Expected behavior
Expected to see a warning
### Screenshots / Videos
<details open>
<summary>Captures</summary>
[Upload media here]
</details>
### Operating System
macOS
### Operating system details
15
### Flet version
0.25.0.dev3679
### Regression
Yes, it used to work in a previous Flet version (please specify the version in additional details)
### Suggestions
_No response_
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Additional details
_No response_
| 1,731,103,482,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/colors.py:colors.with_opacity",
"sdk/python/packages/flet/src/flet/core/colors.py:colors",
"sdk/python/packages/flet/src/flet/core/colors.py:Colors.with_opacity",
"sdk/python/packages/flet/src/flet/core/colors.py:Colors"
] | [
"sdk/python/packages/flet/src/flet/core/colors.py:colors"
] |
||
flet-dev/flet | flet-dev__flet-4314 | 3b7241e3f5024ee47f3bcba3092a9e71e56bfe42 | diff --git a/sdk/python/packages/flet/src/flet/core/segmented_button.py b/sdk/python/packages/flet/src/flet/core/segmented_button.py
index 01f303af6..7c1934834 100644
--- a/sdk/python/packages/flet/src/flet/core/segmented_button.py
+++ b/sdk/python/packages/flet/src/flet/core/segmented_button.py
@@ -203,12 +203,11 @@ def before_update(self):
assert (
len(self.selected) < 2 or self.allow_multiple_selection
), "allow_multiple_selection must be True for selected to have more than one item"
- if self.__style is None:
- self.__style = ButtonStyle()
- self.__style.side = self._wrap_attr_dict(self.__style.side)
- self.__style.shape = self._wrap_attr_dict(self.__style.shape)
- self.__style.padding = self._wrap_attr_dict(self.__style.padding)
- self._set_attr_json("style", self.__style)
+ style = self.__style or ButtonStyle()
+ style.side = self._wrap_attr_dict(style.side)
+ style.shape = self._wrap_attr_dict(style.shape)
+ style.padding = self._wrap_attr_dict(style.padding)
+ self._set_attr_json("style", style)
def _get_children(self):
for segment in self.segments:
| user customized style for SegmentedButton not wrapped
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
User-customized style is not working in SegmentedButton

### Code sample
<details open><summary>Code</summary>
```python
import flet as ft
def main(page: ft.Page):
page.window.width = 800
page.window.height = 500
page.window.alignment = ft.alignment.center
def handle_change(e):
print("on_change data : " + str(e.data))
def on_error(e) -> None:
print("error: ", e.data)
page.on_error = on_error
style = ft.ButtonStyle(shape=ft.RoundedRectangleBorder(radius=5))
page.add(
ft.SegmentedButton(
on_change=handle_change,
selected_icon=ft.Icon(ft.icons.ONETWOTHREE),
selected={"1", "4"},
allow_multiple_selection=True,
style=style,
segments=[
ft.Segment(
value="1",
label=ft.Text("1"),
icon=ft.Icon(ft.icons.LOOKS_ONE),
),
ft.Segment(
value="2",
label=ft.Text("2"),
icon=ft.Icon(ft.icons.LOOKS_TWO),
),
ft.Segment(
value="3",
label=ft.Text("3"),
icon=ft.Icon(ft.icons.LOOKS_3),
),
ft.Segment(
value="4",
label=ft.Text("4"),
icon=ft.Icon(ft.icons.LOOKS_4),
),
],
)
)
ft.app(main)
```
</details>
### To reproduce
pass customized style to SegmentedButton (style has customized attribute value, e.g. `style = ft.ButtonStyle(shape=ft.RoundedRectangleBorder(radius=5))`)
### Expected behavior
SegmentedButton has RoundedRectangleBorder
### Screenshots / Videos
<details open>
<summary>Captures</summary>
[Upload media here]

</details>
### Operating System
Windows
### Operating system details
Windows 11 24H2
### Flet version
0.24.1
### Regression
No, it isn't
### Suggestions
_No response_
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Additional details
_No response_
| 1,730,865,480,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/segmented_button.py:SegmentedButton.before_update"
] | [] |
||
explodinggradients/ragas | explodinggradients__ragas-1627 | c3a183167b375bf5e22c8c8959d212e6b58103be | diff --git a/src/ragas/metrics/_noise_sensitivity.py b/src/ragas/metrics/_noise_sensitivity.py
index a6a903b68..4074d0edc 100644
--- a/src/ragas/metrics/_noise_sensitivity.py
+++ b/src/ragas/metrics/_noise_sensitivity.py
@@ -101,7 +101,6 @@ async def _decompose_answer_into_statements(
sentences_with_index = {
i: sentence
for i, sentence in enumerate(sentences)
- if sentence.strip().endswith(".")
}
statements_simplified = await self.statement_prompt.generate(
| In NoiseSensitivity, the `.` should be deleted as the split condition.
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
After the sentence is split, I think the `.` should no longer be used as a split condition. Many other languages do not use `.` as a sentence terminator. Just follow the result of `sentence_segmenter`.
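A tiny illustration of the point (the sentences are made up for this example): the old filter silently drops any sentence that does not end with an ASCII period.
```python
# Mirror of the removed condition in _decompose_answer_into_statements
sentences = ["这是第一句。", "This is the second sentence."]
kept = {i: s for i, s in enumerate(sentences) if s.strip().endswith(".")}
print(kept)  # {1: 'This is the second sentence.'} - the Chinese sentence is dropped
```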
**Code Examples**
https://github.com/explodinggradients/ragas/blob/main/src/ragas/metrics/_noise_sensitivity.py#L104C13-L104C46
**Additional context**
I will provide a pr.
| 1,730,804,031,000 | null | Feature Request | [
"src/ragas/metrics/_noise_sensitivity.py:NoiseSensitivity._decompose_answer_into_statements"
] | [] |
||
scikit-learn/scikit-learn | scikit-learn__scikit-learn-30241 | 551d56c254197c4b6ad63974d749824ed2c7bc58 | diff --git a/sklearn/utils/estimator_checks.py b/sklearn/utils/estimator_checks.py
index f5d542d9a59fc..da1300817b148 100644
--- a/sklearn/utils/estimator_checks.py
+++ b/sklearn/utils/estimator_checks.py
@@ -2076,11 +2076,11 @@ def check_regressor_multioutput(name, estimator):
assert y_pred.dtype == np.dtype("float64"), (
"Multioutput predictions by a regressor are expected to be"
- " floating-point precision. Got {} instead".format(y_pred.dtype)
+ f" floating-point precision. Got {y_pred.dtype} instead"
)
assert y_pred.shape == y.shape, (
"The shape of the prediction for multioutput data is incorrect."
- " Expected {}, got {}."
+ f" Expected {y_pred.shape}, got {y.shape}."
)
| Missing format string arguments
This assertion error string is not properly formatted as the 2 format arguments `y_pred.shape` and `y.shape` are missing:
https://github.com/scikit-learn/scikit-learn/blob/551d56c254197c4b6ad63974d749824ed2c7bc58/sklearn/utils/estimator_checks.py#L2139
```python
assert y_pred.shape == y.shape, (
"The shape of the prediction for multioutput data is incorrect."
" Expected {}, got {}."
)
```
should become
```python
assert y_pred.shape == y.shape, (
"The shape of the prediction for multioutput data is incorrect."
" Expected {}, got {}.".format(y_pred.shape, y.shape)
)
```
| Please feel free to directly submit a PR with the fix in the future in such cases :) | 1,731,059,196,000 | null | Bug Report | [
"sklearn/utils/estimator_checks.py:check_regressor_multioutput"
] | [] |
|
Lightning-AI/pytorch-lightning | Lightning-AI__pytorch-lightning-20484 | 601c0608059ed33ac617a57bb122e17b88c35c9a | diff --git a/src/lightning/pytorch/loops/prediction_loop.py b/src/lightning/pytorch/loops/prediction_loop.py
index 7044ccea87a7f..dcfd873a28b4b 100644
--- a/src/lightning/pytorch/loops/prediction_loop.py
+++ b/src/lightning/pytorch/loops/prediction_loop.py
@@ -233,8 +233,9 @@ def _predict_step(
self.batch_progress.increment_ready()
- if not using_dataloader_iter:
- any_on_epoch = self._store_data_for_prediction_writer(batch_idx, dataloader_idx)
+ any_on_epoch = (
+ self._store_data_for_prediction_writer(batch_idx, dataloader_idx) if not using_dataloader_iter else False
+ )
# the `_step` methods don't take a batch_idx when `dataloader_iter` is used, but all other hooks still do,
# so we need different kwargs
| UnboundLocalError: local variable 'any_on_epoch' referenced before assignment in prediction loop
### Bug description
An `UnboundLocalError` is raised when using the predict method with `return_predictions=False`.
This is due to `any_on_epoch` [not being defined](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236) if `data_fetcher` is [an instance](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L229) of `_DataLoaderIterDataFetcher`.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
UnboundLocalError: local variable 'any_on_epoch' referenced before assignment
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
| nice catch
> This is due to any_on_epoch [not being defined](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236) if data_fetcher is [not an instance](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L229) of _DataLoaderIterDataFetcher.
it's actually the other way around, it errors out when it *is* an instance of `_DataLoaderIterDataFetcher`
the solution is to replace lines 236 and 237:
https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236-L237
with
```python
any_on_epoch = self._store_data_for_prediction_writer(batch_idx, dataloader_idx) if not using_dataloader_iter else False
```
Would you like to volunteer for a quick PR? | 1,733,763,932,000 | null | Bug Report | [
"src/lightning/pytorch/loops/prediction_loop.py:_PredictionLoop._predict_step"
] | [] |
|
Lightning-AI/pytorch-lightning | Lightning-AI__pytorch-lightning-20420 | 20d19d2f5728f7049272f2db77a9748ff4cf5ccd | diff --git a/examples/fabric/build_your_own_trainer/run.py b/examples/fabric/build_your_own_trainer/run.py
index 01044f5d94fa8..c0c2ff28ddc41 100644
--- a/examples/fabric/build_your_own_trainer/run.py
+++ b/examples/fabric/build_your_own_trainer/run.py
@@ -41,7 +41,8 @@ def training_step(self, batch, batch_idx: int):
def configure_optimizers(self):
optim = torch.optim.Adam(self.parameters(), lr=1e-4)
- return optim, {
+ return {
+ "optimizer": optim,
"scheduler": torch.optim.lr_scheduler.ReduceLROnPlateau(optim, mode="max", verbose=True),
"monitor": "val_accuracy",
"interval": "epoch",
| OptimizerLRScheduler typing does not fit examples
### Bug description
The return type of `LightningModule.configure_optimizers()` is `OptimizerLRScheduler`, see the [source code](https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/src/lightning/pytorch/core/module.py#L954). However, the examples give return types not fitting this return type. See e.g. the example [here](https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/examples/fabric/build_your_own_trainer/run.py#L42-L49).
Furthermore, the `OptimizerLRScheduler` is only used as a return type, but I don't see where it is actually used, i.e. the other part of the typed interface. A [search for it](https://github.com/search?q=repo%3ALightning-AI%2Fpytorch-lightning%20OptimizerLRScheduler&type=code) does not reveal it.
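For reference, a sketch of a return value that does satisfy the annotated union for a standard `LightningModule` (the key names follow the usual Lightning convention; note that the custom trainer in `examples/fabric/build_your_own_trainer` parses its own dict layout, as the patch above shows):
```python
def configure_optimizers(self):
    optim = torch.optim.Adam(self.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, mode="max")
    # dict form matching OptimizerLRSchedulerConfig
    return {
        "optimizer": optim,
        "lr_scheduler": {
            "scheduler": sched,
            "monitor": "val_accuracy",
            "interval": "epoch",
        },
    }
```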
### What version are you seeing the problem on?
2.3.1
### How to reproduce the bug
Just run mypy on an example, e.g. https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/examples/fabric/build_your_own_trainer/run.py#L42-L49.
### Error messages and logs
Running mypy on this causes a bug:
```
Error: Incompatible return value type (got "tuple[Adam, dict[str, object]]", expected "Optimizer | Sequence[Optimizer] | tuple[Sequence[Optimizer], Sequence[LRScheduler | ReduceLROnPlateau | LRSchedulerConfig]] | OptimizerLRSchedulerConfig | Sequence[OptimizerLRSchedulerConfig] | None")
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: None
* Lightning:
- lightning-utilities: 0.11.3.post0
- pytorch-lightning: 2.3.1
- torch: 2.3.1
- torchmetrics: 0.8.0
- torchvision: 0.18.1
* Packages:
- absl-py: 2.1.0
- aenum: 3.1.15
- aiohttp: 3.9.5
- aiosignal: 1.3.1
- alabaster: 0.7.16
- albumentations: 1.3.1
- antlr4-python3-runtime: 4.9.3
- arabic-reshaper: 3.0.0
- asn1crypto: 1.5.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- babel: 2.15.0
- boto3: 1.34.137
- botocore: 1.34.137
- build: 1.2.1
- certifi: 2024.6.2
- cffi: 1.16.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- coloredlogs: 15.0.1
- contourpy: 1.2.1
- coverage: 5.3.1
- cryptography: 42.0.8
- cssselect2: 0.7.0
- cycler: 0.12.1
- data-gradients: 0.3.2
- deprecated: 1.2.14
- docutils: 0.17.1
- einops: 0.3.2
- exceptiongroup: 1.2.1
- filelock: 3.15.4
- flatbuffers: 24.3.25
- fonttools: 4.53.0
- frozenlist: 1.4.1
- fsspec: 2024.6.1
- future: 1.0.0
- grpcio: 1.64.1
- html5lib: 1.1
- huggingface-hub: 0.23.4
- humanfriendly: 10.0
- hydra-core: 1.3.2
- idna: 3.7
- imagededup: 0.3.1
- imageio: 2.34.2
- imagesize: 1.4.1
- iniconfig: 2.0.0
- jinja2: 3.1.4
- jmespath: 1.0.1
- joblib: 1.4.2
- json-tricks: 3.16.1
- jsonschema: 4.22.0
- jsonschema-specifications: 2023.12.1
- kiwisolver: 1.4.5
- lazy-loader: 0.4
- lightly: 1.5.8
- lightly-train: 0.1.0
- lightly-utils: 0.0.2
- lightning-utilities: 0.11.3.post0
- lxml: 5.2.2
- markdown: 3.6
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.9.0
- mdurl: 0.1.2
- mpmath: 1.3.0
- multidict: 6.0.5
- mypy: 1.10.1
- mypy-extensions: 1.0.0
- networkx: 3.3
- numpy: 1.23.0
- omegaconf: 2.3.0
- onnx: 1.15.0
- onnxruntime: 1.15.0
- onnxsim: 0.4.36
- opencv-python: 4.10.0.84
- opencv-python-headless: 4.10.0.84
- oscrypto: 1.3.0
- packaging: 24.1
- pandas: 2.2.2
- pillow: 10.4.0
- pip: 24.1.1
- pip-tools: 7.4.1
- platformdirs: 4.2.2
- pluggy: 1.5.0
- protobuf: 3.20.3
- psutil: 6.0.0
- pycparser: 2.22
- pydantic: 1.10.17
- pydeprecate: 0.3.2
- pygments: 2.18.0
- pyhanko: 0.25.0
- pyhanko-certvalidator: 0.26.3
- pyparsing: 3.1.2
- pypdf: 4.2.0
- pypng: 0.20220715.0
- pyproject-hooks: 1.1.0
- pytest: 8.2.2
- pytest-mock: 3.14.0
- python-bidi: 0.4.2
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.3.1
- pytz: 2024.1
- pywavelets: 1.6.0
- pyyaml: 6.0.1
- qrcode: 7.4.2
- qudida: 0.0.4
- rapidfuzz: 3.9.3
- referencing: 0.35.1
- reportlab: 3.6.13
- requests: 2.32.3
- rich: 13.7.1
- rpds-py: 0.18.1
- ruff: 0.5.0
- s3transfer: 0.10.2
- safetensors: 0.4.3
- scikit-image: 0.24.0
- scikit-learn: 1.5.0
- scipy: 1.13.1
- seaborn: 0.13.2
- selftrain: 0.1.0
- setuptools: 70.2.0
- six: 1.16.0
- snowballstemmer: 2.2.0
- sphinx: 4.0.3
- sphinx-rtd-theme: 1.3.0
- sphinxcontrib-applehelp: 1.0.8
- sphinxcontrib-devhelp: 1.0.6
- sphinxcontrib-htmlhelp: 2.0.5
- sphinxcontrib-jquery: 4.1
- sphinxcontrib-jsmath: 1.0.1
- sphinxcontrib-qthelp: 1.0.7
- sphinxcontrib-serializinghtml: 1.1.10
- stringcase: 1.2.0
- super-gradients: 3.7.1
- svglib: 1.5.1
- sympy: 1.12.1
- tensorboard: 2.17.0
- tensorboard-data-server: 0.7.2
- termcolor: 1.1.0
- threadpoolctl: 3.5.0
- tifffile: 2024.6.18
- timm: 1.0.7
- tinycss2: 1.3.0
- tomli: 2.0.1
- torch: 2.3.1
- torchmetrics: 0.8.0
- torchvision: 0.18.1
- tqdm: 4.66.4
- treelib: 1.6.1
- typing-extensions: 4.12.2
- tzdata: 2024.1
- tzlocal: 5.2
- uritools: 4.0.3
- urllib3: 2.2.2
- webencodings: 0.5.1
- werkzeug: 3.0.3
- wheel: 0.43.0
- wrapt: 1.16.0
- xhtml2pdf: 0.2.11
- yarl: 1.9.4
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: arm
- python: 3.10.8
- release: 23.5.0
- version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
### More info
_No response_
| Hey @MalteEbner
The optimizer should be inside the dict in that example. Thank you for noticing it. Would you like to send a PR with this quick fix? Would be much appreciated 😃
> Hey @MalteEbner The optimizer should be inside the dict in that example. Thank you for noticing it. Would you like to send a PR with this quick fix? Would be much appreciated 😃
Unfortunately, the problem is not that the example is wrong. This is just a symptom of an underlying problem. The problem is that the `LightningModule.configure_optimizers() -> OptimizerLRScheduler` and its usage don't fit.
To see why, have a look at where `configure_optimizers()` is used, see the source code [here](https://github.com/Lightning-AI/pytorch-lightning/blob/f91349c961103af48091654775248789b6e03bd1/src/lightning/pytorch/core/optimizer.py#L179-L200): its output is stored under the variable name `optim_conf` and then passed to the function
```python
def _configure_optimizers(
optim_conf: Union[Dict[str, Any], List, Optimizer, Tuple],
)
```
However, the definition type `OptimizerLRScheduler` and the usage type `Union[Dict[str, Any], List, Optimizer, Tuple]` don't align. To fix this, more is needed:
1. Change it to `optim_conf: OptimizerLRScheduler`, so that the usage of `configure_optimizers()` has the same type as its definition.
2. Redefine the `OptimizerLRScheduler` such that it fits the supported types in `_configure_optimizers`.
I'm not seeing any mypy errors regarding this. Which version did you use and what was the command you ran? You can see the version we test with here: https://github.com/Lightning-AI/pytorch-lightning/blob/master/requirements/typing.txt
We typically bump it together when a new torch version comes out (which is soon again). Maybe your issue will show up then, but I'm not seeing it locally.
Yes sure, the internal `_configure_optimizers` uses a bit more generic typing. Feel free to update it to be more specific 👍 . The return type of `LightningModule.configure_optimizers()` should not be changed, this looks all good to me still. | 1,731,575,568,000 | null | Bug Report | [
"examples/fabric/build_your_own_trainer/run.py:MNISTModule.configure_optimizers"
] | [] |
|
Lightning-AI/pytorch-lightning | Lightning-AI__pytorch-lightning-20401 | c110f4f3f60c643740f5e3573546abfcb5355315 | diff --git a/src/lightning/pytorch/cli.py b/src/lightning/pytorch/cli.py
index 26af335f7be93..e0de8a24b38f5 100644
--- a/src/lightning/pytorch/cli.py
+++ b/src/lightning/pytorch/cli.py
@@ -389,6 +389,7 @@ def __init__(
self._add_instantiators()
self.before_instantiate_classes()
self.instantiate_classes()
+ self.after_instantiate_classes()
if self.subcommand is not None:
self._run_subcommand(self.subcommand)
@@ -560,6 +561,9 @@ def instantiate_classes(self) -> None:
self._add_configure_optimizers_method_to_model(self.subcommand)
self.trainer = self.instantiate_trainer()
+ def after_instantiate_classes(self) -> None:
+ """Implement to run some code after instantiating the classes."""
+
def instantiate_trainer(self, **kwargs: Any) -> Trainer:
"""Instantiates the trainer.
| Proposal(CLI): after_instantiate_classes hook
### Description & Motivation
Adds a `after_instantiate_classes` hook to the Lightning CLI, called after `self.instantiate_classes()` during the initalization of `LightningCLI`.
### Pitch
While having the Lightning CLI is great, it is not perfect for each use case out-of-the-box. Hence, you included hooks like `before_instantiate_classes` and describe in the docs how to extend the CLI. Problem is, you cannot extend this feature without hacks or substantial copy-pasta.
I think, to further improve the CLI, without adding any complexity, it makes sense to add a `after_instantiate_classes` hook, too.
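A sketch of how the hook could be used once available (the subclass name and the print statement are made-up illustrations; `self.model` and `self.trainer` are already populated by `instantiate_classes()`):
```python
from lightning.pytorch.cli import LightningCLI

class MyCLI(LightningCLI):
    def after_instantiate_classes(self) -> None:
        # Runs right after model/datamodule/trainer are built,
        # before the subcommand is executed.
        print(f"built {type(self.model).__name__} and {type(self.trainer).__name__}")
```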
### Alternatives
1. Hacks
- Extend the Lightning CLI and run the `after_instantiate_classes` function before the `self._run_subcommand` function.
- Problems: it's not intuitive that the function is called there, won't be called if `self.subcommand is None`
2. Copy-Pasta
- Extend the Lightning CLI and replace the original `__init__` with the proposed one.
- Problems: could break with any update, lots of code duplication
### Additional context
_No response_
cc @borda @tchaton @justusschock @awaelchli @mauvilsa
| 1,730,892,823,000 | null | Feature Request | [
"src/lightning/pytorch/cli.py:LightningCLI.__init__"
] | [
"src/lightning/pytorch/cli.py:LightningCLI.after_instantiate_classes"
] |
||
kornia/kornia | kornia__kornia-3084 | b230615f08bb0fff1b3044fc8ccb38f21bd9e817 | diff --git a/kornia/augmentation/container/augment.py b/kornia/augmentation/container/augment.py
index a9cad91924..ebd892bf17 100644
--- a/kornia/augmentation/container/augment.py
+++ b/kornia/augmentation/container/augment.py
@@ -507,11 +507,9 @@ def __call__(
if output_type == "tensor":
self._output_image = _output_image
if isinstance(_output_image, dict):
- self._output_image[original_keys[idx]] = self._detach_tensor_to_cpu(
- _output_image[original_keys[idx]]
- )
+ self._output_image[original_keys[idx]] = _output_image[original_keys[idx]]
else:
- self._output_image[idx] = self._detach_tensor_to_cpu(_output_image[idx])
+ self._output_image[idx] = _output_image[idx]
elif isinstance(_output_image, dict):
self._output_image[original_keys[idx]] = _output_image[original_keys[idx]]
else:
| AugmentationSequential explicitly moves the output to the CPU if data_keys is given
### Describe the bug
With the 0.7.4 release, augmentations on the GPU are not possible anymore because the output of the input tensor is always explicitly moved to the CPU.
The problem is that `_detach_tensor_to_cpu` is called explicitly on every tensor in [augment.py](https://github.com/kornia/kornia/blob/main/kornia/augmentation/container/augment.py#L507).
### Reproduction steps
The augmented version of the input gets moved to the CPU:
```python
import torch
import kornia
input = torch.randn(1, 3, 10, 10, device="cuda")
mask = torch.randn(1, 10, 10, device="cuda")
aug_list = kornia.augmentation.AugmentationSequential(kornia.augmentation.RandomHorizontalFlip(p=1))
for a in aug_list(input, mask, data_keys=["input", "mask"]):
print(a.dtype, a.device)
# torch.float32 cpu
# torch.float32 cuda:0
```
Without `data_keys` and only one input, the output device is as expected:
```python
for a in aug_list(input):
print(a.dtype, a.device)
# torch.float32 cuda:0
```
### Expected behavior
I would expect the augmented version of the input tensor to reside on the GPU.
### Environment
- PyTorch Version (e.g., 1.0): 2.5.1
- OS (e.g., Linux): Ubuntu
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.12
- CUDA/cuDNN version: 12.4
- GPU models and configuration: NVIDIA 4090 RTX
- Any other relevant information:
### Additional context
Reverting back to 0.7.3 fixes the problem.
I can see that this was touched in https://github.com/kornia/kornia/pull/2979 @ashnair1 @shijianjian @johnnv1 do you recall why we need `_detach_tensor_to_cpu` here instead of something like `_detach_tensor_to_device`?
It was mainly for passing the tests, because the randomness handling for CUDA and CPU is different.
I think the bug is because the dict tensors are not handled properly to be moved back to CUDA. @ashnair1, can you fix this? We may need to do a patch release after this.
BTW, I think we may need a pre-merge action to run CUDA tests, at least one run for each merge. It has happened many times, I think, that CUDA errors show up after all CPU tests have passed.
> BTW, I think we may need a pre-merge action to run CUDA tests. At least to run it for one-time for each merge. It has happened many times I think that some CUDA errors after all CPU tests passed.
For that, we need someone to sponsor with access to a machine that has a cuda device.
Will the meta device be helpful? So we mock cuda instead?
https://pytorch.org/docs/stable/meta.html
https://github.com/pytorch/pytorch/issues/61654#issuecomment-879989145
we can try yes
One of Kornia's biggest strengths is fast GPU augmentations. This bug makes 0.7.4 unusable for many including me - are there any updates on a fix?
hey @ashnair1, can you help fix this issue? | 1,733,310,216,000 | null | Bug Report | [
"kornia/augmentation/container/augment.py:AugmentationSequential.__call__"
] | [] |
|
wandb/wandb | wandb__wandb-9011 | 87070417bcde22e45fc3a662b2dfd73d79981ad9 | diff --git a/wandb/apis/public/runs.py b/wandb/apis/public/runs.py
index c605630e079..b29b5109f91 100644
--- a/wandb/apis/public/runs.py
+++ b/wandb/apis/public/runs.py
@@ -236,7 +236,6 @@ def histories(
if not histories:
return pd.DataFrame()
combined_df = pd.concat(histories)
- combined_df.sort_values("run_id", inplace=True)
combined_df.reset_index(drop=True, inplace=True)
# sort columns for consistency
combined_df = combined_df[(sorted(combined_df.columns))]
@@ -262,9 +261,9 @@ def histories(
histories.append(df)
if not histories:
return pl.DataFrame()
- combined_df = pl.concat(histories, how="align")
+ combined_df = pl.concat(histories, how="vertical")
# sort columns for consistency
- combined_df = combined_df.select(sorted(combined_df.columns)).sort("run_id")
+ combined_df = combined_df.select(sorted(combined_df.columns))
return combined_df
| [Bug]: Incorrect order in `runs.histories()`
### Describe the bug
`Runs` have an `order` attribute. However, in the `Runs.histories()` method the order is not respected and the `run_id` is used to sort instead (see [Relevant code](https://github.com/wandb/wandb/blob/v0.18.7/wandb/apis/public/runs.py#L236)). I don't think this is desirable, and it is also undocumented behavior. I would recommend that the order specified in the `Runs` object is used, or that no reordering happens at all, so that the order is preserved to match the one obtained when iterating over the `runs` object via the iterator interface.
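A minimal sketch of the mismatch (the `entity/project` path is a placeholder; `histories()` is iterated with its default per-row dict output):
```python
import wandb

api = wandb.Api()
runs = api.runs("entity/project")       # iteration below follows the Runs order
iter_ids = [run.id for run in runs]

hist_ids = []
for row in runs.histories():            # before the fix, rows come back sorted by run_id
    if row["run_id"] not in hist_ids:
        hist_ids.append(row["run_id"])

# iter_ids and hist_ids can disagree, which makes matching run metadata to history hard
```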
wandb: 0.17.3
python 3.11.8
operating system: Ubuntu 22.04
| Hi @GuillaumeOpenAI! Thank you for writing in!
I have tried out something like this on my end:
```
import wandb
api = wandb.Api()
runs = api.runs("acyrtest/support-team-external-storage")
hist = runs.histories()
for hists in hist:
print(hists)
```
and got this as the result, which is not sorted by run_id but by creation date, just like it is in the W&B UI:
```
...
{'_runtime': 0.43318, '_timestamp': 1730926130.456495, 'test': 3, '_step': 3, 'run_id': 'ry7uzzqp'}
{'_runtime': 0.433185, '_timestamp': 1730926130.456522, 'test': 4, '_step': 4, 'run_id': 'ry7uzzqp'}
{'_runtime': 0.43319, '_timestamp': 1730926130.456544, 'test': 5, '_step': 5, 'run_id': 'ry7uzzqp'}
{'_runtime': 0.43322, '_timestamp': 1730926130.4565701, 'test': 6, '_step': 6, 'run_id': 'ry7uzzqp'}
{'_runtime': 0.433225, '_timestamp': 1730926130.456596, 'test': 7, '_step': 7, 'run_id': 'ry7uzzqp'}
{'_runtime': 0.433297, '_timestamp': 1730926130.4566169, 'test': 8, '_step': 8, 'run_id': 'ry7uzzqp'}
{'_runtime': 0.433305, '_timestamp': 1730926130.456641, 'test': 9, '_step': 9, 'run_id': 'ry7uzzqp'}
{'test': 0, '_step': 0, '_runtime': 0.33756, '_timestamp': 1730926195.386603, 'run_id': 'rdav159p'}
{'test': 1, '_step': 1, '_runtime': 0.337583, '_timestamp': 1730926195.386668, 'run_id': 'rdav159p'}
{'test': 2, '_step': 2, '_runtime': 0.337587, '_timestamp': 1730926195.3867, 'run_id': 'rdav159p'}
{'test': 3, '_step': 3, '_runtime': 0.337592, '_timestamp': 1730926195.3867269, 'run_id': 'rdav159p'}
{'test': 4, '_step': 4, '_runtime': 0.337597, '_timestamp': 1730926195.386753, 'run_id': 'rdav159p'}
{'test': 5, '_step': 5, '_runtime': 0.337613, '_timestamp': 1730926195.386775, 'run_id': 'rdav159p'}
{'test': 6, '_step': 6, '_runtime': 0.337673, '_timestamp': 1730926195.3867981, 'run_id': 'rdav159p'}
{'test': 7, '_step': 7, '_runtime': 0.337688, '_timestamp': 1730926195.386824, 'run_id': 'rdav159p'}
{'test': 8, '_step': 8, '_runtime': 0.337705, '_timestamp': 1730926195.386844, 'run_id': 'rdav159p'}
{'test': 9, '_step': 9, '_runtime': 0.337723, '_timestamp': 1730926195.3868642, 'run_id': 'rdav159p'}
{'_timestamp': 1730926993.72419, 'test': 0, '_step': 0, '_runtime': 0.340699, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.72425, 'test': 1, '_step': 1, '_runtime': 0.340727, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.724282, 'test': 2, '_step': 2, '_runtime': 0.340733, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.72431, 'test': 3, '_step': 3, '_runtime': 0.340739, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.7243352, 'test': 4, '_step': 4, '_runtime': 0.340757, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.724362, 'test': 5, '_step': 5, '_runtime': 0.340792, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.7243829, 'test': 6, '_step': 6, '_runtime': 0.34085, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.724406, 'test': 7, '_step': 7, '_runtime': 0.340863, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.724425, 'test': 8, '_step': 8, '_runtime': 0.340896, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730926993.724446, 'test': 9, '_step': 9, '_runtime': 0.340904, 'run_id': 'aesgkd8u'}
{'_timestamp': 1730927371.951988, 'test': 0, '_step': 0, '_runtime': 0.300452, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.952055, 'test': 1, '_step': 1, '_runtime': 0.30048, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.952088, 'test': 2, '_step': 2, '_runtime': 0.300486, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.952116, 'test': 3, '_step': 3, '_runtime': 0.300492, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.9521449, 'test': 4, '_step': 4, '_runtime': 0.300496, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.9521668, 'test': 5, '_step': 5, '_runtime': 0.300524, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.952188, 'test': 6, '_step': 6, '_runtime': 0.300585, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.9522078, 'test': 7, '_step': 7, '_runtime': 0.300596, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.952228, 'test': 8, '_step': 8, '_runtime': 0.300603, 'run_id': 'k633grbi'}
{'_timestamp': 1730927371.952248, 'test': 9, '_step': 9, '_runtime': 0.300609, 'run_id': 'k633grbi'}
...
```
In the example above you can tell the runs are not sorted by run_id but by the timestamp instead.
Are you seeing a different behavior on your end?
My point is that they should be sorted by the order specified when the `runs` object is created.
When I iterate over my runs via a `for` loop or via `histories` I get a different ordering, which is problematic, as it's hard to match the `metadata` to the history of a run if the two methods return different orders.
Gotcha thank you for the quick follow-up.
I have gone ahead and sent the feedback over to our engineering team. I'll let you know when I get an update from them. | 1,733,315,969,000 | null | Bug Report | [
"wandb/apis/public/runs.py:Runs.histories"
] | [] |
|
wandb/wandb | wandb__wandb-8931 | 70058a9e7bf09249d546226192ad3f8b0de04cb7 | diff --git a/wandb/sdk/data_types/video.py b/wandb/sdk/data_types/video.py
index 54e338ef2a5..41310640d6f 100644
--- a/wandb/sdk/data_types/video.py
+++ b/wandb/sdk/data_types/video.py
@@ -138,10 +138,21 @@ def __init__(
self.encode(fps=fps)
def encode(self, fps: int = 4) -> None:
- mpy = util.get_module(
- "moviepy.editor",
- required='wandb.Video requires moviepy when passing raw data. Install with "pip install wandb[media]"',
- )
+ # Try to import ImageSequenceClip from the appropriate MoviePy module
+ mpy = None
+ try:
+ # Attempt to load moviepy.editor for MoviePy < 2.0
+ mpy = util.get_module(
+ "moviepy.editor",
+ required='wandb.Video requires moviepy when passing raw data. Install with "pip install wandb[media]"',
+ )
+ except wandb.Error:
+ # Fallback to moviepy for MoviePy >= 2.0
+ mpy = util.get_module(
+ "moviepy",
+ required='wandb.Video requires moviepy when passing raw data. Install with "pip install wandb[media]"',
+ )
+
tensor = self._prepare_video(self.data)
_, self._height, self._width, self._channels = tensor.shape # type: ignore
| [Bug]: "wandb.Video requires moviepy when passing raw data" Error due to new moviepy version
### Describe the bug
The moviepy package was updated to 2.x and removed the `moviepy.editor` namespace (see [here](https://zulko.github.io/moviepy/getting_started/updating_to_v2.html)), breaking the `Video.encode` method.
This can be fixed by either migrating to moviepy 2.x or pinning the earlier version.
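A sketch of the fallback import pattern such a migration could use (assuming MoviePy < 2.0 exposes `moviepy.editor` while 2.x exposes the same classes from the top-level package):
```python
# Try the pre-2.0 module first, then fall back to the top-level package.
try:
    import moviepy.editor as mpy   # MoviePy < 2.0
except ImportError:
    import moviepy as mpy          # MoviePy >= 2.0
```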
Version info:
NixOS 24.05
Python 3.11.10
wandb 0.18.7
moviepy 2.1.1
| 1,732,214,869,000 | null | Bug Report | [
"wandb/sdk/data_types/video.py:Video.encode"
] | [] |
||
speechbrain/speechbrain | speechbrain__speechbrain-2760 | 16b6420d4ff23210cfca2e888be8853264e0cb17 | diff --git a/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py b/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py
index 1507f85093..2dad9e1e46 100644
--- a/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py
+++ b/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py
@@ -42,6 +42,8 @@ class WeightedSSLModel(HFTransformersInterface):
freeze : bool (default: True)
If True, the model is frozen. If False, the model will be trained
alongside with the rest of the pipeline.
+ **kwargs : dict
+ Additional arguments to pass to HFTransformersInterface
Example
-------
@@ -52,14 +54,19 @@ class WeightedSSLModel(HFTransformersInterface):
>>> outputs = model(inputs)
"""
- def __init__(self, hub, save_path="", layernorm=False, freeze=False):
- super().__init__(source=hub, save_path=save_path, freeze=freeze)
+ def __init__(
+ self, hub, save_path="", layernorm=False, freeze=False, **kwargs
+ ):
+ super().__init__(
+ source=hub, save_path=save_path, freeze=freeze, **kwargs
+ )
self.model.eval()
+ self.layernorm = layernorm
+ self.freeze = freeze
self.num_layers = self.config.num_hidden_layers + 1
# Initializing the learnable weights
zero_init = torch.cat([torch.zeros(self.num_layers)])
self.weights = torch.nn.Parameter(zero_init, requires_grad=True)
- self.layernorm = layernorm
def forward(self, wav, wav_lens=None):
"""This method outputs a weighted sum of the layer representations of the SSL encoder
@@ -78,21 +85,25 @@ def forward(self, wav, wav_lens=None):
"""
feats = self.model(wav)
- hidden_states = torch.stack(feats.hidden_states, dim=0).detach()
+ if self.freeze:
+ hidden_states = torch.stack(feats.hidden_states, dim=0).detach()
+ else:
+ hidden_states = torch.stack(feats.hidden_states, dim=0)
+
# First dimension should be equal to the number of layers in the hparams
assert (
self.num_layers == hidden_states.shape[0]
), "Num layers not equal to num hidden states"
- norm_weights = torch.nn.functional.softmax(self.weights, dim=-1)
+
# Layernorming the layers representations if asked
if self.layernorm:
- hidden_states = [
- F.layer_norm(t, (t.shape[-1],)) for t in hidden_states
- ]
+ normalized_shape = (hidden_states.size(-1),)
+ hidden_states = F.layer_norm(hidden_states, normalized_shape)
+
# Summing the weighted layers
- weighted_feats = (
- hidden_states * norm_weights[:, None, None, None]
- ).sum(axis=0)
+ norm_weights = F.softmax(self.weights, dim=-1).view(-1, 1, 1, 1)
+ weighted_feats = (hidden_states * norm_weights).sum(axis=0)
+
return weighted_feats
def override_config(self, config):
| Weighted SSL model not unfreezable
### Describe the bug
In our HF Weighted SSL model implementation, we `detach()` the hidden states, meaning weights are not updated.
Relevant code:
https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py#L81
```
hidden_states = torch.stack(feats.hidden_states, dim=0).detach()
```
### Expected behaviour
If passing `freeze=False` I'd expect the weights to get updated.
### To Reproduce
_No response_
### Environment Details
_No response_
### Relevant Log Output
_No response_
### Additional Context
_No response_
| Also if `layernorm=True` then the hidden states are converted to a list which causes a program crash. They should be re-stacked into a tensor. | 1,732,134,555,000 | null | Bug Report | [
"speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py:WeightedSSLModel.__init__",
"speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py:WeightedSSLModel.forward"
] | [] |
|
speechbrain/speechbrain | speechbrain__speechbrain-2742 | c4a424306a58a08dbdf3f86f4c9a32eecf7c94f3 | diff --git a/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py b/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py
index 5c65a49682..90e0d118d3 100644
--- a/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py
+++ b/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py
@@ -107,7 +107,6 @@ def compute_objectives(self, predictions, batch, stage):
target_words = self.tokenizer.batch_decode(
target_words, skip_special_tokens=True
)
-
if hasattr(self.hparams, "normalized_transcripts"):
if hasattr(self.tokenizer, "normalize"):
@@ -237,7 +236,10 @@ def audio_pipeline(wav):
"wrd", "tokens_list", "tokens_bos", "tokens_eos", "tokens"
)
def text_pipeline(wrd):
- if "normalized_transcripts" in hparams:
+ if (
+ "normalized_transcripts" in hparams
+ and hparams["normalized_transcripts"]
+ ):
wrd = tokenizer.normalize(wrd)
yield wrd
tokens_list = tokenizer.encode(wrd, add_special_tokens=False)
| Syntax Bug in Librispeech Whisper Recipe
### Describe the bug
These bugs listed below are related to whisper specifically following this [recipe](recipes/LibriSpeech/ASR/transformer/train_with_whisper.py)
1) line 228 in dataio_prepare, hparams is a dictionary so `hasattr(hparams, "normalized_transcripts")` does not work as intended.
2) line 50. I've gotten some rounding issues for some batches where the .long() rounds down rather than to the nearest int. (i.e. 12.998 gets rounded down to 12, excluding the last token).
### Expected behaviour
1) just change syntax to check keys in dictionary
2) use torch.round() to ensure proper mask computation
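A tiny illustration of point 2 above (the tensor value mirrors the 12.998 example from the bug description; the actual recipe code at line 50 is not reproduced here):
```python
import torch

lens = torch.tensor([12.998])
print(lens.long())               # tensor([12]) - truncation drops the last token
print(torch.round(lens).long())  # tensor([13]) - rounds to the nearest int
```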
### To Reproduce
_No response_
### Environment Details
_No response_
### Relevant Log Output
_No response_
### Additional Context
_No response_
| Hi @matthewkperez, thanks for opening this issue! Would you like to open a PR to fix it? It would be a very welcome contribution :)
Just submitted PR #2737 for this. Cheers! | 1,730,394,415,000 | null | Bug Report | [
"recipes/LibriSpeech/ASR/transformer/train_with_whisper.py:dataio_prepare"
] | [] |
|
mesonbuild/meson | mesonbuild__meson-13881 | f0851c9e4b1760c552f7921e6b6a379b006ba014 | diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
index cb3552d7f0c1..7b573e4e4d8a 100644
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -2369,7 +2369,7 @@ def generate_dynamic_link_rules(self) -> None:
options = self._rsp_options(compiler)
self.add_rule(NinjaRule(rule, command, args, description, **options, extra=pool))
- if self.environment.machines[for_machine].is_aix():
+ if self.environment.machines[for_machine].is_aix() and complist:
rule = 'AIX_LINKER{}'.format(self.get_rule_suffix(for_machine))
description = 'Archiving AIX shared library'
cmdlist = compiler.get_command_to_archive_shlib()
| Building meson-python fails in AIX
When I tried to build the meson-python master branch on AIX using meson, I got the below error:
```
Traceback (most recent call last):
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/mesonmain.py", line 193, in run
return options.run_func(options)
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/msetup.py", line 365, in run
app.generate()
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/msetup.py", line 188, in generate
return self._generate(env, capture, vslite_ctx)
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/msetup.py", line 253, in _generate
captured_compile_args = intr.backend.generate(capture, vslite_ctx)
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/backend/ninjabackend.py", line 642, in generate
self.generate_rules()
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/backend/ninjabackend.py", line 1354, in generate_rules
self.generate_dynamic_link_rules()
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/backend/ninjabackend.py", line 2376, in generate_dynamic_link_rules
cmdlist = compiler.get_command_to_archive_shlib()
UnboundLocalError: local variable 'compiler' referenced before assignment
```
For this I would like to propose a simple fix that handles the scenario when compiler is empty
```
diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
index cb3552d7f..7b573e4e4 100644
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -2369,7 +2369,7 @@ class NinjaBackend(backends.Backend):
options = self._rsp_options(compiler)
self.add_rule(NinjaRule(rule, command, args, description, **options, extra=pool))
- if self.environment.machines[for_machine].is_aix():
+ if self.environment.machines[for_machine].is_aix() and complist:
rule = 'AIX_LINKER{}'.format(self.get_rule_suffix(for_machine))
description = 'Archiving AIX shared library'
cmdlist = compiler.get_command_to_archive_shlib()
```
Kindly let me know if I can raise a PR and whether you're okay with this.
| cc: @eli-schwartz
I can confirm this bug exists and the fix solves the issue.
Ah hmm, right. This happens because we only iterate over all configured project languages that also support using a linker, and then create AIX_LINKER for the last one basically. This fails for projects that don't have any configured languages, which is okay since they also do not need to run the AIX_LINKER archiving rule.
The fix seems reasonable... | 1,730,961,926,000 | null | Bug Report | [
"mesonbuild/backend/ninjabackend.py:NinjaBackend.generate_dynamic_link_rules"
] | [] |
|
ultralytics/ultralytics | ultralytics__ultralytics-18212 | 626e42ef253b5c20fa83412e7daf9b713484a866 | diff --git a/ultralytics/engine/model.py b/ultralytics/engine/model.py
index db8d87ebc2..8affd958f2 100644
--- a/ultralytics/engine/model.py
+++ b/ultralytics/engine/model.py
@@ -115,7 +115,7 @@ def __init__(
self.predictor = None # reuse predictor
self.model = None # model object
self.trainer = None # trainer object
- self.ckpt = None # if loaded from *.pt
+ self.ckpt = {} # if loaded from *.pt
self.cfg = None # if loaded from *.yaml
self.ckpt_path = None
self.overrides = {} # overrides for trainer object
@@ -807,7 +807,7 @@ def train(
# Update model and cfg after training
if RANK in {-1, 0}:
ckpt = self.trainer.best if self.trainer.best.exists() else self.trainer.last
- self.model, _ = attempt_load_one_weight(ckpt)
+ self.model, self.ckpt = attempt_load_one_weight(ckpt)
self.overrides = self.model.args
self.metrics = getattr(self.trainer.validator, "metrics", None) # TODO: no metrics returned by DDP
return self.metrics
| Saving yolov6n crashes
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
Crash on "yolov6n.yaml" save.
```
New https://pypi.org/project/ultralytics/8.3.49 available 😃 Update with 'pip install -U ultralytics'
Ultralytics 8.3.44 🚀 Python-3.10.12 torch-2.5.1+cu124 CPU (AMD Ryzen 3 3200G with Radeon Vega Graphics)
engine/trainer: task=detect, mode=train, model=yolov6n.yaml, data=coco8.yaml, epochs=1, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train11, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs/detect/train11
activation: nn.ReLU()
from n params module arguments
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 2 18560 ultralytics.nn.modules.conv.Conv [32, 32, 3, 1]
3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
4 -1 4 147968 ultralytics.nn.modules.conv.Conv [64, 64, 3, 1]
5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
6 -1 6 886272 ultralytics.nn.modules.conv.Conv [128, 128, 3, 1]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 2 1180672 ultralytics.nn.modules.conv.Conv [256, 256, 3, 1]
9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
10 -1 1 16512 ultralytics.nn.modules.conv.Conv [256, 64, 1, 1]
11 -1 1 16448 torch.nn.modules.conv.ConvTranspose2d [64, 64, 2, 2, 0]
12 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
13 -1 1 110720 ultralytics.nn.modules.conv.Conv [192, 64, 3, 1]
14 -1 3 110976 ultralytics.nn.modules.conv.Conv [64, 64, 3, 1]
15 -1 1 2112 ultralytics.nn.modules.conv.Conv [64, 32, 1, 1]
16 -1 1 4128 torch.nn.modules.conv.ConvTranspose2d [32, 32, 2, 2, 0]
17 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
18 -1 1 27712 ultralytics.nn.modules.conv.Conv [96, 32, 3, 1]
19 -1 3 27840 ultralytics.nn.modules.conv.Conv [32, 32, 3, 1]
20 -1 1 9280 ultralytics.nn.modules.conv.Conv [32, 32, 3, 2]
21 [-1, 15] 1 0 ultralytics.nn.modules.conv.Concat [1]
22 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 1]
23 -1 3 110976 ultralytics.nn.modules.conv.Conv [64, 64, 3, 1]
24 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
25 [-1, 10] 1 0 ultralytics.nn.modules.conv.Concat [1]
26 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 1]
27 -1 3 443136 ultralytics.nn.modules.conv.Conv [128, 128, 3, 1]
28 [19, 23, 27] 1 607360 ultralytics.nn.modules.head.Detect [80, [32, 64, 128]]
YOLOv6n summary: 195 layers, 4,500,080 parameters, 4,500,064 gradients, 13.1 GFLOPs
TensorBoard: Start with 'tensorboard --logdir runs/detect/train11', view at http://localhost:6006/
Freezing layer 'model.28.dfl.conv.weight'
train: Scanning datasets/coco8/labels/train.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<?, ?it/s]
val: Scanning datasets/coco8/labels/val.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<?, ?it/s]
Plotting labels to runs/detect/train11/labels.jpg...
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 53 weight(decay=0.0), 62 weight(decay=0.0005), 61 bias(decay=0.0)
TensorBoard: model graph visualization added ✅
Image sizes 640 train, 640 val
Using 0 dataloader workers
Logging results to runs/detect/train11
Starting training for 1 epochs...
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
1/1 0G 3.483 5.686 4.311 22 640: 100%|██████████| 1/1 [00:05<00:00, 5.91s/it]
Class Images Instances Box(P R mAP50 mAP50-95): 100%|██████████| 1/1 [00:02<00:00, 2.16s/it]
all 4 17 0 0 0 0
1 epochs completed in 0.006 hours.
Optimizer stripped from runs/detect/train11/weights/last.pt, 9.2MB
Optimizer stripped from runs/detect/train11/weights/best.pt, 9.2MB
Validating runs/detect/train11/weights/best.pt...
WARNING ⚠️ validating an untrained model YAML will result in 0 mAP.
Ultralytics 8.3.44 🚀 Python-3.10.12 torch-2.5.1+cu124 CPU (AMD Ryzen 3 3200G with Radeon Vega Graphics)
YOLOv6n summary (fused): 142 layers, 4,495,392 parameters, 0 gradients, 13.0 GFLOPs
Class Images Instances Box(P R mAP50 mAP50-95): 100%|██████████| 1/1 [00:01<00:00, 1.31s/it]
all 4 17 0 0 0 0
Speed: 4.7ms preprocess, 314.3ms inference, 0.0ms loss, 2.3ms postprocess per image
Results saved to runs/detect/train11
Traceback (most recent call last):
File "main.py", line 91, in <module>
detectImageWithYolo()
File "main.py", line 70, in detectImageWithYolo
model.save("yolov6coco.pt")
File ".venv/lib/python3.10/site-packages/ultralytics/engine/model.py", line 414, in save
torch.save({**self.ckpt, **updates}, filename)
TypeError: 'NoneType' object is not a mapping
```
### Environment
Ultralytics 8.3.44 🚀 Python-3.10.12 torch-2.5.1+cu124 CPU (AMD Ryzen 3 3200G with Radeon Vega Graphics)
Setup complete ✅ (4 CPUs, 23.3 GB RAM, 2769.5/3185.4 GB disk)
OS Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.12
Install pip
RAM 23.35 GB
Disk 2769.5/3185.4 GB
CPU AMD Ryzen 3 3200G with Radeon Vega Graphics
CPU count 4
GPU None
GPU count None
CUDA None
numpy ✅ 2.0.2>=1.23.0
numpy ✅ 2.0.2<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.12>=2.0.0
{'OS': 'Linux-6.8.0-49-generic-x86_64-with-glibc2.35', 'Environment': 'Linux', 'Python': '3.10.12', 'Install': 'pip', 'RAM': '23.35 GB', 'Disk': '2769.5/3185.4 GB', 'CPU': 'AMD Ryzen 3 3200G with Radeon Vega Graphics', 'CPU count': 4, 'GPU': None, 'GPU count': None, 'CUDA': None, 'Package Info': {'numpy': '✅ 2.0.2<2.0.0; sys_platform == "darwin"', 'matplotlib': '✅ 3.9.2>=3.3.0', 'opencv-python': '✅ 4.10.0.84>=4.6.0', 'pillow': '✅ 11.0.0>=7.1.2', 'pyyaml': '✅ 6.0.2>=5.3.1', 'requests': '✅ 2.32.3>=2.23.0', 'scipy': '✅ 1.14.1>=1.4.1', 'torch': '✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"', 'torchvision': '✅ 0.20.1>=0.9.0', 'tqdm': '✅ 4.67.1>=4.64.0', 'psutil': '✅ 6.1.0', 'py-cpuinfo': '✅ 9.0.0', 'pandas': '✅ 2.2.3>=1.1.4', 'seaborn': '✅ 0.13.2>=0.11.0', 'ultralytics-thop': '✅ 2.0.12>=2.0.0'}}
### Minimal Reproducible Example
```
model = YOLO("yolov6n.yaml")
model.train(data="coco8.yaml", epochs=1, imgsz=640)
model.save("yolov6coco.pt")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
| 👋 Hello @EmmanuelMess, thank you for your interest in Ultralytics 🚀! We appreciate you taking the time to report this issue.
To help us investigate further, could you please confirm the reproducibility of this issue using the latest version of Ultralytics? You can upgrade with the command below:
```bash
pip install -U ultralytics
```
We also noticed that you've shared a reproducible example (thank you for that!). If the issue persists in the most recent version, this example will be very helpful for us to debug. Please also ensure that your `ultralytics` environment is aligned with our recommended setups:
### Environments
YOLO can be run in any of the following up-to-date verified environments (pre-installed with all necessary dependencies such as [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/)):
- **Notebooks** with free GPU: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/models/ultralytics/yolo11"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
- **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>
### Join the Community
- For real-time debugging or discussions, join our [Discord](https://discord.com/invite/ultralytics) server 🎧.
- Share your insights or questions on [Discourse](https://community.ultralytics.com) or [Reddit](https://reddit.com/r/Ultralytics) to interact with the thriving community.
### Helpful Resources
If you'd like to further explore concepts or troubleshoot other issues, check out our comprehensive [Docs](https://docs.ultralytics.com), including:
- [Model Training Tips](https://docs.ultralytics.com/guides/model-training-tips/)
- [Minimum Reproducible Example Guide](https://docs.ultralytics.com/help/minimum_reproducible_example/)
If this is a 🐛 bug as appears to be the case, an Ultralytics engineer will assist you soon to look deeper into the root cause. We'll stay on top of resolving this for you! 🔍😊
## Status
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
If this badge is green, all [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests are currently passing. CI tests run on macOS, Windows, and Ubuntu frequently to maintain performance and reliability.
Let us know how it goes! 🚀
The model is automatically saved during training inside the runs/detect folder. You don't need to use `model.save()`.
> The model is automatically saved during training inside the runs/detect folder. You don't need to use `model.save()`.
But I want to explicitly save it to a file. It works with YOLOv11, why not with v6?
It doesn't work if you load from yaml.
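For anyone hitting this before a fix lands, a minimal workaround sketch (the run directory name is an assumption; check your runs/detect folder for the actual path):
```python
from ultralytics import YOLO

model = YOLO("yolov6n.yaml")
model.train(data="coco8.yaml", epochs=1, imgsz=640)

# Reload the trained checkpoint, which carries the ckpt dict that save() expects;
# "train" is a placeholder for whatever run directory training actually created.
trained = YOLO("runs/detect/train/weights/best.pt")
trained.save("yolov6coco.pt")
```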
> It doesn't work if you load from yaml.
Thanks for the explanation, but it probably shouldn't crash. Maybe add an error message? | 1,734,061,207,000 | null | Bug Report | [
"ultralytics/engine/model.py:Model.__init__",
"ultralytics/engine/model.py:Model.train"
] | [] |
|
ultralytics/ultralytics | ultralytics__ultralytics-17872 | 21162bd870444550286983a601afbfb142f4c198 | diff --git a/ultralytics/engine/predictor.py b/ultralytics/engine/predictor.py
index c28e1895d07..c5250166e9e 100644
--- a/ultralytics/engine/predictor.py
+++ b/ultralytics/engine/predictor.py
@@ -155,7 +155,7 @@ def pre_transform(self, im):
same_shapes = len({x.shape for x in im}) == 1
letterbox = LetterBox(
self.imgsz,
- auto=same_shapes and (self.model.pt or getattr(self.model, "dynamic", False)),
+ auto=same_shapes and (self.model.pt or (getattr(self.model, "dynamic", False) and not self.model.imx)),
stride=self.model.stride,
)
return [letterbox(image=x) for x in im]
IMX500 usage example error
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
I encountered an error when running the example code from the [Sony IMX500 usage examples](https://docs.ultralytics.com/integrations/sony-imx500/#usage-examples). The image is resized to 480x640 instead of the expected 640x640, despite both the ONNX model input and the packerOut description specifying a 640x640 input.
The model is exported successfully, but I am unable to run inference.
```
Export complete (298.0s)
Results saved to /home/magi/mido/ultralytics
Predict: yolo predict task=detect model=yolov8n_imx_model imgsz=640 int8
Validate: yolo val task=detect model=yolov8n_imx_model imgsz=640 data=coco.yaml int8
Visualize: https://netron.app
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Loading yolov8n_imx_model for ONNX Runtime inference...
Preferring ONNX Runtime AzureExecutionProvider
Loading yolov8n_imx_model/yolov8n_imx.onnx for ONNX IMX inference...
Found https://ultralytics.com/images/bus.jpg locally at bus.jpg
Traceback (most recent call last):
File "/home/magi/mido/ultralytics/yolo_v8_playground.py", line 13, in <module>
results = imx_model("https://ultralytics.com/images/bus.jpg")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/ultralytics/engine/model.py", line 176, in __call__
return self.predict(source, stream, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/ultralytics/engine/model.py", line 554, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 173, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 36, in generator_context
response = gen.send(None)
^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 259, in stream_inference
preds = self.inference(im, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 143, in inference
return self.model(im, augment=self.args.augment, visualize=visualize, embed=self.args.embed, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/ultralytics/nn/autobackend.py", line 542, in forward
y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/magi/mido/ultralytics/.venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
return self._sess.run(output_names, input_feed, run_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: input for the following indices
index: 3 Got: 480 Expected: 640
Please fix either the inputs/outputs or the model.
```
I tested resizing the image to 640x640 before inference, and it worked properly. However, I assume that the inference call should handle the resizing automatically without requiring manual adjustment beforehand.
### Environment
Ultralytics 8.3.37 🚀 Python-3.11.10 torch-2.5.1+cu124 CUDA:0 (NVIDIA GeForce GTX 1650 Ti, 3906MiB)
Setup complete ✅ (12 CPUs, 31.0 GB RAM, 444.7/913.8 GB disk)
OS Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.11.10
Install pip
RAM 30.97 GB
Disk 444.7/913.8 GB
CPU Intel Core(TM) i7-10750H 2.60GHz
CPU count 12
GPU NVIDIA GeForce GTX 1650 Ti, 3906MiB
GPU count 1
CUDA 12.4
numpy ✅ 1.26.4>=1.23.0
numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.12>=2.0.0
{'OS': 'Linux-6.8.0-49-generic-x86_64-with-glibc2.35', 'Environment': 'Linux', 'Python': '3.11.10', 'Install': 'pip', 'RAM': '30.97 GB', 'Disk': '444.7/913.8 GB', 'CPU': 'Intel Core(TM) i7-10750H 2.60GHz', 'CPU count': 12, 'GPU': 'NVIDIA GeForce GTX 1650 Ti, 3906MiB', 'GPU count': 1, 'CUDA': '12.4', 'Package Info': {'numpy': '✅ 1.26.4<2.0.0; sys_platform == "darwin"', 'matplotlib': '✅ 3.9.2>=3.3.0', 'opencv-python': '✅ 4.10.0.84>=4.6.0', 'pillow': '✅ 11.0.0>=7.1.2', 'pyyaml': '✅ 6.0.2>=5.3.1', 'requests': '✅ 2.32.3>=2.23.0', 'scipy': '✅ 1.14.1>=1.4.1', 'torch': '✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"', 'torchvision': '✅ 0.20.1>=0.9.0', 'tqdm': '✅ 4.67.1>=4.64.0', 'psutil': '✅ 6.1.0', 'py-cpuinfo': '✅ 9.0.0', 'pandas': '✅ 2.2.3>=1.1.4', 'seaborn': '✅ 0.13.2>=0.11.0', 'ultralytics-thop': '✅ 2.0.12>=2.0.0'}}
### Minimal Reproducible Example
```python
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# # # Export the model
model.export(format="imx") # exports with PTQ quantization by default
# Load the exported model
imx_model = YOLO("yolov8n_imx_model")
# Run inference
results = imx_model("https://ultralytics.com/images/bus.jpg")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
| 👋 Hello @Magitoneu, thank you for your interest in Ultralytics 🚀! We appreciate you taking the time to report this issue. Here’s a quick guide to help us investigate this further:
It seems like you’re experiencing an error related to image resizing when running the Sony IMX500 example. If this is indeed a 🐛 Bug Report, we kindly request you to confirm the behavior by sharing a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). This will help ensure we can reproduce and debug the issue effectively.
In the meantime, please ensure that you’ve updated to the latest version of the `ultralytics` package and all related requirements. You can do so by running the following command in your terminal:
```bash
pip install -U ultralytics
```
We also recommend reviewing the [Sony IMX500 integration documentation](https://docs.ultralytics.com/integrations/sony-imx500/#usage-examples) to double-check the expected behavior of the example code. It’s possible that some additional steps may be required for handling image preprocessing.
## Resources
For troubleshooting tips and to learn more ways of using the library, visit the [Ultralytics Docs](https://docs.ultralytics.com). Additional examples for Python and CLI-based workflows are available to guide you:
- [Python](https://docs.ultralytics.com/usage/python/)
- [CLI](https://docs.ultralytics.com/usage/cli/)
If you’re trying to improve your training results or explore new features, don’t miss our [Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/).
## Join the Community
For real-time discussions with other Ultralytics users, join our [Discord](https://discord.com/invite/ultralytics) server 🎧. We also host conversations on [Discourse](https://community.ultralytics.com) and our [Subreddit](https://reddit.com/r/Ultralytics) for deeper discussions.
## Verified Environments
If possible, try running your code in one of our tested environments for a consistent experience:
- **Notebooks** with free GPU: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/models/ultralytics/yolo11"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
- **Docker Image**. Check the [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>
## Status
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
If this badge is green, all [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests are currently passing, verifying that core functionalities across different environments are operating correctly.
This is an automated response 💡. An Ultralytics engineer will review your report and reach out with further assistance soon. Thank you for helping us improve! 😊
@Magitoneu thank you for reporting this issue. It seems that the input dimensions mismatch is due to the image size not being adjusted automatically during inference with the IMX500 export. While resizing the input image to 640x640 manually resolves the issue, automatic resizing is currently not implemented for IMX500-exported models.
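A minimal sketch of that pre-processing step (assumes the image file exists locally):
```python
import cv2
from ultralytics import YOLO

imx_model = YOLO("yolov8n_imx_model")

img = cv2.imread("bus.jpg")         # assumed to be downloaded already
img = cv2.resize(img, (640, 640))   # force the square input the IMX export was built with
results = imx_model(img)
```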
As a workaround, ensure your input images are pre-processed to 640x640 before inference. For improvements in this behavior, feel free to provide additional insights, or you can open a feature request. Let us know if you need further assistance! | 1,732,862,312,000 | null | Bug Report | [
"ultralytics/engine/predictor.py:BasePredictor.pre_transform"
] | [] |
|
ultralytics/ultralytics | ultralytics__ultralytics-17728 | 426879d80d49d0180b525c4fc2484772f9f6f8cc | diff --git a/ultralytics/data/augment.py b/ultralytics/data/augment.py
index d092e3c3703..bd821de28de 100644
--- a/ultralytics/data/augment.py
+++ b/ultralytics/data/augment.py
@@ -1591,7 +1591,7 @@ def __call__(self, labels=None, image=None):
labels["ratio_pad"] = (labels["ratio_pad"], (left, top)) # for evaluation
if len(labels):
- labels = self._update_labels(labels, ratio, dw, dh)
+ labels = self._update_labels(labels, ratio, left, top)
labels["img"] = img
labels["resized_shape"] = new_shape
return labels
| Significant mAP Drop When Using Bottom-Right Padding Instead of Center Padding in YOLOv8 Training
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hi, I'm training a YOLOv8 model on the same dataset, but noticed a significant difference in mAP when changing the padding strategy.
When using center padding (default), the mAP@50 after the first epoch is around 0.88.
When using bottom-right padding instead, the mAP@50 drops to 0.0001.
I ensured that:
The same data augmentation and other settings were used in both cases.
However, the bottom-right padding leads to poor performance. What could be causing such a drastic performance drop? Is it related to padding affecting feature distribution, anchor design, or augmentation strategies? Any suggestions for improving performance in this case would be appreciated!
Thanks!
I changed "center=True" to "center=False" in augment.py, line 1500.



### Additional
_No response_
| 👋 Hello @Gebbap, thank you for bringing your findings to the Ultralytics community's attention 🚀!
We recommend checking out our [Docs](https://docs.ultralytics.com), where you can find comprehensive information on [Python](https://docs.ultralytics.com/usage/python/) and [CLI](https://docs.ultralytics.com/usage/cli/) usage, which may offer some insights into your padding strategy question.
Given this is a deep technical question related to padding strategies, we would appreciate if you could provide a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). This will greatly assist our team in diagnosing and addressing the issue more effectively.
Meanwhile, please ensure that you're running the most up-to-date version of the `ultralytics` package. You can upgrade using the following command:
```bash
pip install -U ultralytics
```
This ensures that any recent fixes or improvements are integrated into your environment.
For additional support and insights from both engineers and experienced community members, feel free to join the conversation on our [Discord](https://ultralytics.com/discord) 🎧, or start a discussion on [Discourse](https://community.ultralytics.com) or our [Subreddit](https://reddit.com/r/ultralytics).
An Ultralytics engineer will soon review your question to provide further assistance. Thank you for your patience and support 👍!
## Environments
YOLO can be executed in various verified environments, which come pre-installed with all dependencies, including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/):
- **Notebooks** with free GPU: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/models/ultralytics/yolo11"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
- **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>
## Status
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
A green badge indicates all [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests are passing. CI tests ensure the correct operation of all YOLO [Modes](https://docs.ultralytics.com/modes/) and [Tasks](https://docs.ultralytics.com/tasks/) on macOS, Windows, and Ubuntu every 24 hours and upon each commit.
Did you check if the training labels are correct in the plots?
Yes, it is correct. Both results used the same dataset and labels.
I checked the labels after setting `center=False` and they're wrong. So this modification is incorrect and breaks the training labels which is why you're getting low scores.

Thank you! Simply modifying the value of "center" only changes the position of the plot but doesn’t adjust the labels correctly. How can I change the padding method? Should I modify the source code, or is there an official way to achieve this? | 1,732,360,098,000 | null | Performance Issue | [
"ultralytics/data/augment.py:LetterBox.__call__"
] | [] |
|
ultralytics/ultralytics | ultralytics__ultralytics-17544 | a132920476b2d38bdd58c7a232888f425f476977 | diff --git a/ultralytics/utils/callbacks/wb.py b/ultralytics/utils/callbacks/wb.py
index b82b8d85ec3..22bbc347566 100644
--- a/ultralytics/utils/callbacks/wb.py
+++ b/ultralytics/utils/callbacks/wb.py
@@ -138,7 +138,7 @@ def on_train_end(trainer):
art.add_file(trainer.best)
wb.run.log_artifact(art, aliases=["best"])
# Check if we actually have plots to save
- if trainer.args.plots:
+ if trainer.args.plots and hasattr(trainer.validator.metrics, "curves_results"):
for curve_name, curve_values in zip(trainer.validator.metrics.curves, trainer.validator.metrics.curves_results):
x, y, x_title, y_title = curve_values
_plot_curve(
| wandb callback reporting fails if no positive examples in validator
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
When using the `wandb` callback, the following error occurs if there are no positive examples:
```
Traceback (most recent call last):
File "pipeline.py", line 97, in <module>
main()
File "pipeline.py", line 84, in main
output_path = train_stage(stage_config)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "train_new.py", line 57, in train_stage
results = model.train(**train_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/ultralytics/engine/model.py", line 802, in train
self.trainer.train()
File "site-packages/ultralytics/engine/trainer.py", line 207, in train
self._do_train(world_size)
File "site-packages/ultralytics/engine/trainer.py", line 477, in _do_train
self.run_callbacks("on_train_end")
File "site-packages/ultralytics/engine/trainer.py", line 168, in run_callbacks
callback(self)
File "site-packages/ultralytics/utils/callbacks/wb.py", line 141, in on_train_end
if trainer.args.plots and trainer.validator.metrics.curves and trainer.validator.metrics.curves_results:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "site-packages/ultralytics/utils/__init__.py", line 221, in __getattr__
raise AttributeError(f"'{name}' object has no attribute '{attr}'. See valid attributes below.\n{self.__doc__}")
AttributeError: 'DetMetrics' object has no attribute 'curves_results'. See valid attributes below.
Utility class for computing detection metrics such as precision, recall, and mean average precision (mAP) of an
object detection model.
Args:
save_dir (Path): A path to the directory where the output plots will be saved. Defaults to current directory.
plot (bool): A flag that indicates whether to plot precision-recall curves for each class. Defaults to False.
on_plot (func): An optional callback to pass plots path and data when they are rendered. Defaults to None.
names (dict of str): A dict of strings that represents the names of the classes. Defaults to an empty tuple.
Attributes:
save_dir (Path): A path to the directory where the output plots will be saved.
plot (bool): A flag that indicates whether to plot the precision-recall curves for each class.
on_plot (func): An optional callback to pass plots path and data when they are rendered.
names (dict of str): A dict of strings that represents the names of the classes.
box (Metric): An instance of the Metric class for storing the results of the detection metrics.
speed (dict): A dictionary for storing the execution time of different parts of the detection process.
Methods:
process(tp, conf, pred_cls, target_cls): Updates the metric results with the latest batch of predictions.
keys: Returns a list of keys for accessing the computed detection metrics.
mean_results: Returns a list of mean values for the computed detection metrics.
class_result(i): Returns a list of values for the computed detection metrics for a specific class.
maps: Returns a dictionary of mean average precision (mAP) values for different IoU thresholds.
fitness: Computes the fitness score based on the computed detection metrics.
ap_class_index: Returns a list of class indices sorted by their average precision (AP) values.
results_dict: Returns a dictionary that maps detection metric keys to their computed values.
curves: TODO
curves_results: TODO
```
### Environment
```
Ultralytics 8.3.30 🚀 Python-3.11.5 torch-2.1.2+cu121 CUDA:0 (NVIDIA GeForce RTX 4090, 24111MiB)
Setup complete ✅ (32 CPUs, 251.5 GB RAM, 6725.0/7096.0 GB disk)
OS Linux-6.5.0-45-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.11.5
Install pip
RAM 251.52 GB
Disk 6725.0/7096.0 GB
CPU AMD Ryzen Threadripper PRO 5955WX 16-Cores
CPU count 32
GPU NVIDIA GeForce RTX 4090, 24111MiB
GPU count 3
CUDA 12.1
numpy ✅ 1.24.3>=1.23.0
matplotlib ✅ 3.7.2>=3.3.0
opencv-python ✅ 4.8.1.78>=4.6.0
pillow ✅ 10.1.0>=7.1.2
pyyaml ✅ 6.0>=5.3.1
requests ✅ 2.31.0>=2.23.0
scipy ✅ 1.11.1>=1.4.1
torch ✅ 2.1.2>=1.8.0
torchvision ✅ 0.16.2>=0.9.0
tqdm ✅ 4.65.0>=4.64.0
psutil ✅ 5.9.0
py-cpuinfo ✅ 8.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.12.2>=0.11.0
ultralytics-thop ✅ 2.0.0>=2.0.0
numpy ✅ 1.24.3<2.0.0; sys_platform == "darwin"
torch ✅ 2.1.2!=2.4.0,>=1.8.0; sys_platform == "win32"
```
### Minimal Reproducible Example
```
import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback
def train_yolo():
# Initialize wandb
wandb.init(
project="yolo-example",
name="training-run",
job_type="training"
)
# Initialize YOLO model
model = YOLO('yolov8n.yaml') # or 'yolov8n.pt' for pretrained
# Add wandb callback to log metrics
add_wandb_callback(model)
# Train the model
results = model.train(
data='coco128_negative.yaml', # path to data config file with negatives
epochs=3,
batch=16,
imgsz=640
)
# Close wandb run
wandb.finish()
if __name__ == "__main__":
train_yolo()
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
| 1,731,625,609,000 | null | Bug Report | [
"ultralytics/utils/callbacks/wb.py:on_train_end"
] | [] |
||
ultralytics/ultralytics | ultralytics__ultralytics-17499 | 496e6a3b8680e4ccd4f190e30841748aee2cb89c | diff --git a/ultralytics/engine/results.py b/ultralytics/engine/results.py
index 029e4471e04..8de0a2e6a1c 100644
--- a/ultralytics/engine/results.py
+++ b/ultralytics/engine/results.py
@@ -750,7 +750,7 @@ def save_crop(self, save_dir, file_name=Path("im.jpg")):
save_one_box(
d.xyxy,
self.orig_img.copy(),
- file=Path(save_dir) / self.names[int(d.cls)] / f"{Path(file_name)}.jpg",
+ file=Path(save_dir) / self.names[int(d.cls)] / Path(file_name).with_suffix(".jpg"),
BGR=True,
)
| Save_crop method from Results with default params results in double file extension
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
The `save_crop` method in the Results object already adds a file extension (.jpg) to the saved image, and the default value is set to 'im.jpg', so when using the default behaviour we get a file named **"im.jpg.jpg"**.
With these few examples:
`results[0].save_crop(save_dir='../out')`
`results[0].save_crop(save_dir='../out', file_name='img.png')`
`results[0].save_crop(save_dir='../out', file_name='img')`
Here are the outputs in a File Explorer; the only one without a double extension is when you pass file_name without any extension.

### Environment
Setup complete ✅ (8 CPUs, 15.9 GB RAM, 477.1/931.5 GB disk)
OS Windows-10-10.0.17763-SP0
Environment Windows
Python 3.10.15
Install pip
RAM 15.94 GB
Disk 477.1/931.5 GB
CPU Intel Core(TM) i7-9700 3.00GHz
CPU count 8
GPU NVIDIA GeForce GTX 1660, 6144MiB
GPU count 1
CUDA 11.8
numpy ✅ 1.26.3>=1.23.0
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.2.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.5.1+cu118>=1.8.0
torchvision ✅ 0.20.1+cu118>=0.9.0
tqdm ✅ 4.67.0>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.11>=2.0.0
numpy ✅ 1.26.3<2.0.0; sys_platform == "darwin"
torch ✅ 2.5.1+cu118!=2.4.0,>=1.8.0; sys_platform == "win32"
### Minimal Reproducible Example
```
model = YOLO('../model/yolov8s.pt')
results = model.predict(frame, conf=0.5)
results[0].save_crop(save_dir='../out')
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
| 👋 Hello @M3nxudo, thank you for bringing this to our attention! We're excited to assist you 🚀 and appreciate your proactive approach to contribute with a PR.
For anyone facing similar issues, we highly recommend checking out our [Docs](https://docs.ultralytics.com) for guidance on both [Python](https://docs.ultralytics.com/usage/python/) and [CLI](https://docs.ultralytics.com/usage/cli/) usage.
Since this seems to be a 🐛 Bug Report, please ensure that the provided [minimal reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) accurately reflects the issue you're facing. This will help us diagnose the problem more efficiently.
For real-time help or to engage with our vibrant community, you can join us on [Discord](https://ultralytics.com/discord) 🎧. For more in-depth discussions, consider visiting our [Discourse](https://community.ultralytics.com) or share insights with members on our [Subreddit](https://reddit.com/r/ultralytics).
## Upgrade
Make sure you're using the latest version of the `ultralytics` package, along with all its [requirements](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml). Verify this in a [**Python>=3.8**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/):
```bash
pip install -U ultralytics
```
## Environments
You can run YOLO in various verified environments that include all necessary dependencies, such as [CUDA](https://developer.nvidia.com/cuda), [Python](https://www.python.org/), and [PyTorch](https://pytorch.org/):
- **Notebooks** with free GPU: <a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"/></a> <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/models/ultralytics/yolo11"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM: Check the [GCP Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/)
- **Amazon** Deep Learning AMI: See the [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/)
- **Docker Image**: Refer to the [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) <a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Docker Pulls"></a>
## Status
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
Our [Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml?query=event%3Aschedule) tests run every 24 hours and on all commits to ensure smooth operation across different environments and verify correct functionality for all YOLO [Modes](https://docs.ultralytics.com/modes/) and [Tasks](https://docs.ultralytics.com/tasks/).
This is an automated response and an Ultralytics engineer will review your issue shortly. Thank you for your understanding and collaboration! 🌟
The easiest solution would be to change the default value of the file_name param on line 721 of results.py:
https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/results.py
`def save_crop(self, save_dir, file_name=Path("im.jpg")):`
to
**file_name=Path("im")**
since on line 753 we already add the file extension:
`file=Path(save_dir) / self.names[int(d.cls)] / f"{Path(file_name)}.jpg",` | 1,731,417,643,000 | null | Bug Report | [
"ultralytics/engine/results.py:Results.save_crop"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-10269 | 2739241ad189aef9372394a185b864cbbb9ab5a8 | diff --git a/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py b/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
index 6ddd9ac23009..c7474d56c708 100644
--- a/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
+++ b/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
@@ -99,10 +99,19 @@ def __init__(
self._step_index = None
self._begin_index = None
+ self._shift = shift
+
self.sigmas = sigmas.to("cpu") # to avoid too much CPU/GPU communication
self.sigma_min = self.sigmas[-1].item()
self.sigma_max = self.sigmas[0].item()
+ @property
+ def shift(self):
+ """
+ The value used for shifting.
+ """
+ return self._shift
+
@property
def step_index(self):
"""
@@ -128,6 +137,9 @@ def set_begin_index(self, begin_index: int = 0):
"""
self._begin_index = begin_index
+ def set_shift(self, shift: float):
+ self._shift = shift
+
def scale_noise(
self,
sample: torch.FloatTensor,
@@ -236,7 +248,7 @@ def set_timesteps(
if self.config.use_dynamic_shifting:
sigmas = self.time_shift(mu, 1.0, sigmas)
else:
- sigmas = self.config.shift * sigmas / (1 + (self.config.shift - 1) * sigmas)
+ sigmas = self.shift * sigmas / (1 + (self.shift - 1) * sigmas)
if self.config.shift_terminal:
sigmas = self.stretch_shift_to_terminal(sigmas)
| Allow configuring `shift=` for SD3 dynamically
**Is your feature request related to a problem? Please describe.**
Allow passing `shift=` per inference call (like timesteps) on the pipeline, for flow matching scheduler, or allow `set_shift()` etc. on the scheduler. This seems to be the key to getting good results with SD3 https://x.com/bfitzgerald242/status/1801018438120341911
| Hi, you can do it like this:
```python
from diffusers import FlowMatchEulerDiscreteScheduler
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=3.0)
```
Yep! But the same format is applicable for timesteps, and I was wondering if we can get around re-instantiating the scheduler again and again?
Not for the moment, but I can see the potential in adding it as an argument if people change it a lot for each inference.
In my experience it didn't fix the anatomy problems, and sometimes it made the quality worse, but I tested it with the T5; I still need to test it without it and do some more generations.
Can you share some examples where changing the shift helped with the generation? That would help a lot.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) are likely to be ignored.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) are likely to be ignored.
Useful to have, changing `shift` is popular with HunyuanVideo and SD3/Flux.
Does it really help though? The examples requested were never shared.
Documented benefit for HunyuanVideo.
https://github.com/huggingface/diffusers/blob/1524781b88ac1a082e755a030ba9d73cd6948e84/docs/source/en/api/pipelines/hunyuan_video.md?plain=1#L32
I'll run some tests for SD3/Flux to confirm.
but SD3 already has resolution-dependent shift using the `mu` calculations, right?
Actually this won't do anything for Flux because of dynamic shifting. We recently added support for dynamic shifting in SD3, it's not used by default though. Either way, it's a simple change that has at least some benefit to HunyuanVideo and it won't do any harm to add a function to change `shift` without creating the scheduler each time for those that want it.
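A minimal sketch of what that could look like once a setter exists (the checkpoint id is just an example):
```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler.set_shift(3.0)  # applied on the next set_timesteps() call inside the pipeline
image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```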
Well, not to be argumentative, but the schedulers are stateful, yes? Doesn't this mean recreating them has to be done anyway?
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.__init__",
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler",
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.set_timesteps"
] | [
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.shift",
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.set_shift"
] |
|
huggingface/diffusers | huggingface__diffusers-10262 | f9d5a9324d77169d486a60f3b4b267c74149b982 | diff --git a/src/diffusers/models/unets/unet_2d.py b/src/diffusers/models/unets/unet_2d.py
index 5972505f2897..d05af686dede 100644
--- a/src/diffusers/models/unets/unet_2d.py
+++ b/src/diffusers/models/unets/unet_2d.py
@@ -97,6 +97,7 @@ def __init__(
out_channels: int = 3,
center_input_sample: bool = False,
time_embedding_type: str = "positional",
+ time_embedding_dim: Optional[int] = None,
freq_shift: int = 0,
flip_sin_to_cos: bool = True,
down_block_types: Tuple[str, ...] = ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D"),
@@ -122,7 +123,7 @@ def __init__(
super().__init__()
self.sample_size = sample_size
- time_embed_dim = block_out_channels[0] * 4
+ time_embed_dim = time_embedding_dim or block_out_channels[0] * 4
# Check inputs
if len(down_block_types) != len(up_block_types):
| Make `time_embed_dim` of `UNet2DModel` changeable
**Is your feature request related to a problem? Please describe.**
I want to change the `time_embed_dim` of `UNet2DModel`, but it is hard coded as `time_embed_dim = block_out_channels[0] * 4` in the `__init__` function.
**Describe the solution you'd like.**
Make `time_embedding_dim` a parameter of the `__init__` function, with the default value of `None`. Use `time_embed_dim = time_embedding_dim or block_out_channels[0] * 4` in the function body.
**Describe alternatives you've considered.**
N/A.
**Additional context.**
The same thing in `UNet2DConditionModel` can be changed via the parameter `time_embedding_dim` of its `__init__` function.
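A minimal usage sketch under that proposal (the argument name is assumed to mirror `UNet2DConditionModel`):
```python
from diffusers import UNet2DModel

# Override the default block_out_channels[0] * 4 with an explicit embedding width.
unet = UNet2DModel(sample_size=64, time_embedding_dim=512)
print(unet.config.time_embedding_dim)  # 512
```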
| 1,734,429,874,000 | null | Feature Request | [
"src/diffusers/models/unets/unet_2d.py:UNet2DModel.__init__"
] | [] |
||
huggingface/diffusers | huggingface__diffusers-10185 | 43534a8d1fd405fd0d1e74f991ab97f743bd3e59 | diff --git a/src/diffusers/schedulers/scheduling_repaint.py b/src/diffusers/schedulers/scheduling_repaint.py
index 97665bb5277b..ae953cfb966b 100644
--- a/src/diffusers/schedulers/scheduling_repaint.py
+++ b/src/diffusers/schedulers/scheduling_repaint.py
@@ -319,7 +319,7 @@ def step(
prev_unknown_part = alpha_prod_t_prev**0.5 * pred_original_sample + pred_sample_direction + variance
# 8. Algorithm 1 Line 5 https://arxiv.org/pdf/2201.09865.pdf
- prev_known_part = (alpha_prod_t_prev**0.5) * original_image + ((1 - alpha_prod_t_prev) ** 0.5) * noise
+ prev_known_part = (alpha_prod_t_prev**0.5) * original_image + (1 - alpha_prod_t_prev) * noise
# 9. Algorithm 1 Line 8 https://arxiv.org/pdf/2201.09865.pdf
pred_prev_sample = mask * prev_known_part + (1.0 - mask) * prev_unknown_part
| Potential bug in repaint?
https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322
According to line 5 of Algorithm 1 in the paper, shouldn't the `**0.5` be removed from the second term on line 322?
thanks!
| I also think that should be removed as mentioned in algorithm 1 Line 5 from the [paper](https://arxiv.org/pdf/2201.09865)
```math
x_{t-1}^{known} = \sqrt{\overline{\alpha}_{t}}\, x_0 + (1 - \overline{\alpha}_{t})\, \epsilon
```
Corrected
```python
prev_known_part = (alpha_prod_t_prev**0.5) * original_image + (1 - alpha_prod_t_prev) * noise
```
I don't think fixing this would cause any issues in RePaintScheduler.
@sayakpaul WDYT?
Cc: @yiyixuxu
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md) are likely to be ignored. | 1,733,904,201,000 | null | Bug Report | [
"src/diffusers/schedulers/scheduling_repaint.py:RePaintScheduler.step"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-10182 | 43534a8d1fd405fd0d1e74f991ab97f743bd3e59 | diff --git a/src/diffusers/loaders/lora_pipeline.py b/src/diffusers/loaders/lora_pipeline.py
index eb9b42c5fbb7..1445394b8784 100644
--- a/src/diffusers/loaders/lora_pipeline.py
+++ b/src/diffusers/loaders/lora_pipeline.py
@@ -2313,7 +2313,7 @@ def _maybe_expand_transformer_param_shape_or_error_(
for name, module in transformer.named_modules():
if isinstance(module, torch.nn.Linear):
module_weight = module.weight.data
- module_bias = module.bias.data if hasattr(module, "bias") else None
+ module_bias = module.bias.data if module.bias is not None else None
bias = module_bias is not None
lora_A_weight_name = f"{name}.lora_A.weight"
| Can't load multiple loras when using Flux Control LoRA
### Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999, but had issues loading multiple LoRAs.
For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it errors when loading the depth lora.
### Reproduction
```
from diffusers import FluxControlPipeline
from huggingface_hub import hf_hub_download
import torch
control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
control_pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"))
```
### Logs
```shell
AttributeError Traceback (most recent call last)
Cell In[6], line 8
5 control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
7 control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
----> 8 control_pipe.load_lora_weights(
9 hf_hub_download(
10 "ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"
11 ),
12 adapter_name="HyperFlux",
13 )
File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:1856, in FluxLoraLoaderMixin.load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)
1849 transformer_norm_state_dict = {
1850 k: state_dict.pop(k)
1851 for k in list(state_dict.keys())
1852 if "transformer." in k and any(norm_key in k for norm_key in self._control_lora_supported_norm_keys)
1853 }
1855 transformer = getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer
-> 1856 has_param_with_expanded_shape = self._maybe_expand_transformer_param_shape_or_error_(
1857 transformer, transformer_lora_state_dict, transformer_norm_state_dict
1858 )
1860 if has_param_with_expanded_shape:
1861 logger.info(
1862 "The LoRA weights contain parameters that have different shapes that expected by the transformer. "
1863 "As a result, the state_dict of the transformer has been expanded to match the LoRA parameter shapes. "
1864 "To get a comprehensive list of parameter names that were modified, enable debug logging."
1865 )
File ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:2316, in FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_(cls, transformer, lora_state_dict, norm_state_dict, prefix)
2314 if isinstance(module, torch.nn.Linear):
2315 module_weight = module.weight.data
-> 2316 module_bias = module.bias.data if hasattr(module, "bias") else None
2317 bias = module_bias is not None
2319 lora_A_weight_name = f"{name}.lora_A.weight"
AttributeError: 'NoneType' object has no attribute 'data'
```
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.12
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.5
- Transformers version: 4.47.0
- Accelerate version: 1.2.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA H100 80GB HBM3, 81559 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@a-r-r-o-w @sayakpaul
| Oh, we should have anticipated this use case. I think the correct check should be `module_bias = module.bias.data if module.bias is not None else None` instead.
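A quick self-contained check of why `hasattr` is the wrong test here:
```python
import torch

layer = torch.nn.Linear(4, 4, bias=False)
print(hasattr(layer, "bias"))  # True: the attribute always exists, it is just None
print(layer.bias is None)      # True: this is the condition that actually matters
```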
Even with the above fix, I don't think the weights would load as expected because the depth control lora would expand the input features of `x_embedder` to 128, but Hyper-SD LoRA will have input features of 64. Will try and respond back shortly
cc @yiyixuxu as well
It does indeed error out with the corrected if-statement as well due to the explanation above.
<details>
<summary> trace </summary>
```bash
Traceback (most recent call last):
File "/home/aryan/work/diffusers/dump4.py", line 9, in <module>
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"))
File "/home/aryan/work/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1868, in load_lora_weights
self.load_lora_into_transformer(
File "/home/aryan/work/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1932, in load_lora_into_transformer
transformer.load_lora_adapter(
File "/home/aryan/work/diffusers/src/diffusers/loaders/peft.py", line 320, in load_lora_adapter
incompatible_keys = set_peft_model_state_dict(self, state_dict, adapter_name, **peft_kwargs)
File "/raid/aryan/nightly-venv/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 445, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
File "/raid/aryan/nightly-venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for FluxTransformer2DModel:
size mismatch for x_embedder.lora_A.default_1.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([64, 128]).
```
</details>
I do believe that this should work as expected allowing for depth-control-lora to work with N-step hyper-sd-loras. This is a unique case that has probably never been investigated before. Not completely sure on how we would handle this either :/
My initial thoughts are to expand the lora shapes as well, and set the weights of the linear layer corresponding to the depth control input to 0. This should effectively remove the control latent from interfering with the effect of hyper-sd and it will operate only on the denoising latent. Will experiment and let the results speak for whether this would be something we should try to prioritize support for (as there are 10000+ available Flux loras that might be compatible), and will let YiYi and Sayak comment on how best to handle this situation if it works as expected
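A rough sketch of that zero-padding idea (the tensor names and shapes here are hypothetical, taken only from the 64 vs 128 input-feature discussion above):
```python
import torch

# Hypothetical illustration: expand a LoRA A matrix from 64 to 128 input features
# and zero the new columns so the LoRA ignores the control-latent channels.
rank = 64
lora_A = torch.randn(rank, 64)                       # (rank, in_features) as shipped in the checkpoint
expanded_A = torch.zeros(rank, 128, dtype=lora_A.dtype)
expanded_A[:, :64] = lora_A                          # original denoising-latent channels keep their weights
# columns 64..127 stay zero, so the control input contributes nothing through this LoRA
```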
Are you facing any errors when trying to run inference with LoRAs, but without control LoRAs? Either way, I think the above-mentioned condition needs to be updated.
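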
I just tried using the normal `FluxPipeline`, which also encounters the same issue.
Repro script:
```
from diffusers import FluxPipeline
import torch
from huggingface_hub import hf_hub_download
pipe = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")
pipe.load_lora_weights(
hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"),
)
pipe.load_lora_weights(
"strangerzonehf/Flux-Midjourney-Mix2-LoRA",
)
``` | 1,733,878,565,000 | null | Bug Report | [
"src/diffusers/loaders/lora_pipeline.py:FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-10176 | 09675934006cefb1eb3e58c41fca9ec372a7c797 | diff --git a/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py b/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py
index c6748ad418fe..6c36ec173539 100644
--- a/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py
+++ b/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py
@@ -446,13 +446,14 @@ def prepare_extra_step_kwargs(self, generator, eta):
extra_step_kwargs["generator"] = generator
return extra_step_kwargs
- # Copied from diffusers.pipelines.stable_diffusion_k_diffusion.pipeline_stable_diffusion_k_diffusion.StableDiffusionKDiffusionPipeline.check_inputs
def check_inputs(
self,
prompt,
height,
width,
callback_steps,
+ gligen_images,
+ gligen_phrases,
negative_prompt=None,
prompt_embeds=None,
negative_prompt_embeds=None,
@@ -499,6 +500,13 @@ def check_inputs(
f" {negative_prompt_embeds.shape}."
)
+ if gligen_images is not None and gligen_phrases is not None:
+ if len(gligen_images) != len(gligen_phrases):
+ raise ValueError(
+ "`gligen_images` and `gligen_phrases` must have the same length when both are provided, but"
+ f" got: `gligen_images` with length {len(gligen_images)} != `gligen_phrases` with length {len(gligen_phrases)}."
+ )
+
# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
shape = (
@@ -814,6 +822,8 @@ def __call__(
height,
width,
callback_steps,
+ gligen_images,
+ gligen_phrases,
negative_prompt,
prompt_embeds,
negative_prompt_embeds,
| Raise an error when `len(gligen_images )` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline`
To whom it may concern,
I found that when using `StableDiffusionGLIGENTextImagePipeline`, there is no error raised when `len(gligen_images )` is not equal to `len(gligen_phrases)`. And when I dig into the source code, it seems that these two features are zipped together in a for loop during the preprocessing. I guess this will cause the longer one to be clipped unintentionally. (If my understanding is wrong, feel free to correct me.) Is there any possibility to raise an error or at least warning? Thanks in advance.
Source Code: https://github.com/huggingface/diffusers/blob/v0.31.0/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L689
| Hi @abcdefg133hi. Thanks for finding this. Your understanding is correct, the longer of `gligen_phrases` and `gligen_images` will be clipped:
```python
for phrase, image in zip(["text", "text1", "text2"], ["image", "image1"]):
    print(phrase, image)
# output:
# text image
# text1 image1
```
We should add this to `check_inputs` and raise an error when `len(gligen_images) != len(gligen_phrases)`. Note that `Copied from` will need to be removed.
https://github.com/huggingface/diffusers/blob/c9e4fab42ca481fe8e0d2456b54ec900fb57730d/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L450
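In standalone form, the guard being proposed would look roughly like this (it mirrors what the patch above adds to `check_inputs`; treat it as a sketch):
```python
# sketch: validate matching lengths before the two lists are zipped together
if gligen_images is not None and gligen_phrases is not None:
    if len(gligen_images) != len(gligen_phrases):
        raise ValueError(
            "`gligen_images` and `gligen_phrases` must have the same length when both are provided, "
            f"but got {len(gligen_images)} != {len(gligen_phrases)}."
        )
```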
Would you like to submit a PR?
| 1,733,854,068,000 | null | Bug Report | [
"src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py:StableDiffusionGLIGENTextImagePipeline.check_inputs",
"src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py:StableDiffusionGLIGENTextImagePipeline.__call__"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-10170 | 0e50401e34242dbd4b94a8a3cf0ee24afc25ea65 | diff --git a/src/diffusers/image_processor.py b/src/diffusers/image_processor.py
index 00d8588d5a2a..d6913f045ad2 100644
--- a/src/diffusers/image_processor.py
+++ b/src/diffusers/image_processor.py
@@ -236,7 +236,7 @@ def denormalize(images: Union[np.ndarray, torch.Tensor]) -> Union[np.ndarray, to
`np.ndarray` or `torch.Tensor`:
The denormalized image array.
"""
- return (images / 2 + 0.5).clamp(0, 1)
+ return (images * 0.5 + 0.5).clamp(0, 1)
@staticmethod
def convert_to_rgb(image: PIL.Image.Image) -> PIL.Image.Image:
@@ -537,6 +537,26 @@ def binarize(self, image: PIL.Image.Image) -> PIL.Image.Image:
return image
+ def _denormalize_conditionally(
+ self, images: torch.Tensor, do_denormalize: Optional[List[bool]] = None
+ ) -> torch.Tensor:
+ r"""
+ Denormalize a batch of images based on a condition list.
+
+ Args:
+ images (`torch.Tensor`):
+ The input image tensor.
+ do_denormalize (`Optional[List[bool]`, *optional*, defaults to `None`):
+ A list of booleans indicating whether to denormalize each image in the batch. If `None`, will use the
+ value of `do_normalize` in the `VaeImageProcessor` config.
+ """
+ if do_denormalize is None:
+ return self.denormalize(images) if self.config.do_normalize else images
+
+ return torch.stack(
+ [self.denormalize(images[i]) if do_denormalize[i] else images[i] for i in range(images.shape[0])]
+ )
+
def get_default_height_width(
self,
image: Union[PIL.Image.Image, np.ndarray, torch.Tensor],
@@ -752,12 +772,7 @@ def postprocess(
if output_type == "latent":
return image
- if do_denormalize is None:
- do_denormalize = [self.config.do_normalize] * image.shape[0]
-
- image = torch.stack(
- [self.denormalize(image[i]) if do_denormalize[i] else image[i] for i in range(image.shape[0])]
- )
+ image = self._denormalize_conditionally(image, do_denormalize)
if output_type == "pt":
return image
@@ -966,12 +981,7 @@ def postprocess(
deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False)
output_type = "np"
- if do_denormalize is None:
- do_denormalize = [self.config.do_normalize] * image.shape[0]
-
- image = torch.stack(
- [self.denormalize(image[i]) if do_denormalize[i] else image[i] for i in range(image.shape[0])]
- )
+ image = self._denormalize_conditionally(image, do_denormalize)
image = self.pt_to_numpy(image)
| Post processing performance can be improved
## Problem
Images generated in batches pay a performance penalty in the post-processing step of the diffusion pipeline.
A lot of calls to image_processor.denormalize are made instead of batching the computation.
### Suggested Improvements
#### Using multiplication instead of division
This is a freebie, numerically the same but much cheaper on all compute platforms
```python
def denormalize(images: Union[np.ndarray, torch.Tensor]) -> Union[np.ndarray, torch.Tensor]:
    return (images * 0.5 + 0.5).clamp(0, 1)
```
#### Adding Fast path in an all-or-nothing denormalization scenario
Instead of calling `torch.stack` on a batch where all (or none) of the images require denormalization, apply the operation on the full tensor directly
```python
def post_process(...):
    # ...
    if do_denormalize is None:
        return self.denormalize(images) if self.config.do_normalize else images
    # ...
```
#### Denormalize first, stack later
Invoking the `denormalize` call multiple times incurs a lot of overhead on the compute pipeline, more so than "wasting" compute by denormalizing everything and calling `torch.stack` on the results.
```python
def post_process(...):
    # ...
    denormalized = self.denormalize(image)
    image = torch.stack([
        denormalized[i] if do_denormalize[i] else image[i] for i in range(image.shape[0])
    ])
    # ...
```
## Benchmarks
https://colab.research.google.com/drive/1H1SKUlyEZduUeU50V8SVEZgmFrdl6Ego?usp=sharing
These were run in a Colab T4 instance on CUDA. The batch has shape `[1024,3,512,512]` and dtype `fp16`. Further combinations can be made if requested.
### Baseline
<img width="745" alt="image" src="https://github.com/user-attachments/assets/3bd79c84-bce0-47f4-aae9-5ba86b77a2de">
### Suggested Improvements
<img width="729" alt="image" src="https://github.com/user-attachments/assets/3c739ea0-71e6-467d-b36a-e07e42593003">
| 1,733,826,374,000 | null | Performance Issue | [
"src/diffusers/image_processor.py:VaeImageProcessor.denormalize",
"src/diffusers/image_processor.py:VaeImageProcessor.postprocess",
"src/diffusers/image_processor.py:VaeImageProcessorLDM3D.postprocess"
] | [
"src/diffusers/image_processor.py:VaeImageProcessor._denormalize_conditionally"
] |
||
huggingface/diffusers | huggingface__diffusers-10115 | 65ab1052b8b38687bcf37afe746a7cf20dedc045 | diff --git a/src/diffusers/models/embeddings.py b/src/diffusers/models/embeddings.py
index 91451fa9aac2..8f8f1073da74 100644
--- a/src/diffusers/models/embeddings.py
+++ b/src/diffusers/models/embeddings.py
@@ -959,7 +959,12 @@ def forward(self, ids: torch.Tensor) -> torch.Tensor:
freqs_dtype = torch.float32 if is_mps else torch.float64
for i in range(n_axes):
cos, sin = get_1d_rotary_pos_embed(
- self.axes_dim[i], pos[:, i], repeat_interleave_real=True, use_real=True, freqs_dtype=freqs_dtype
+ self.axes_dim[i],
+ pos[:, i],
+ theta=self.theta,
+ repeat_interleave_real=True,
+ use_real=True,
+ freqs_dtype=freqs_dtype,
)
cos_out.append(cos)
sin_out.append(sin)
| Some bugs in FLUX pipeline
### Describe the bug
1. missing self.theta in get_1d_rotary_pos_embed:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py#L961-L963
2. if prompt_embeds is None, pooled_prompt_embeds will never be computed:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L348-L363
### Reproduction
.
### Logs
_No response_
### System Info
.
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
| `pooled_prompt_embeds` has to be passed when `prompt_embeds` is used so that's ok
https://github.com/huggingface/diffusers/blob/8421c1461bf4ab7801070d04d6ec1e6b28ee5b59/src/diffusers/pipelines/flux/pipeline_flux.py#L422-L425
Would you like to open a PR that passes `self.theta` to `get_1d_rotary_pos_embed`? | 1,733,314,229,000 | null | Bug Report | [
"src/diffusers/models/embeddings.py:FluxPosEmbed.forward"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-10086 | 827b6c25f9b78a297345f356a7d152fd6faf27d8 | diff --git a/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py b/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
index a77231cdc02d..aee1ad8c75f5 100644
--- a/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
+++ b/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
@@ -907,11 +907,7 @@ def __call__(
continue
# expand the latents if we are doing classifier free guidance
- latent_model_input = (
- torch.cat([latents] * 2)
- if self.do_classifier_free_guidance and skip_guidance_layers is None
- else latents
- )
+ latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents
# broadcast to batch dimension in a way that's compatible with ONNX/Core ML
timestep = t.expand(latent_model_input.shape[0])
@@ -935,6 +931,8 @@ def __call__(
else False
)
if skip_guidance_layers is not None and should_skip_layers:
+ timestep = t.expand(latents.shape[0])
+ latent_model_input = latents
noise_pred_skip_layers = self.transformer(
hidden_states=latent_model_input,
timestep=timestep,
| RuntimeError with LyCORIS, Batch Inference and skip_guidance_layers
### Describe the bug
A `RuntimeError` occurs when using the following combination:
* SD3
* Batch inference (`num_images_per_prompt > 1`)
* LyCORIS
* `skip_guidance_layers` is set
The error message is: `"RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 0"`
It seems that batch inference (`num_images_per_prompt > 1`) does not work in conjunction with `skip_guidance_layers`.
### Reproduction
This code snippet produces the error:
```python
self.pipe = StableDiffusion3Pipeline.from_pretrained(
"stabilityai/stable-diffusion-3-medium-diffusers",
torch_dtype=torch.bfloat16
)
self.pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
self.pipe.scheduler.config,
timestep_spacing="trailing",
shift=3.0
)
self.pipe.to("cuda")
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, my_lora, self.pipe.transformer)
wrapper.merge_to()
image = self.pipe(
prompt=request.prompt,
num_inference_steps=request.num_inference_steps,
num_images_per_prompt=2, # Batch inference
output_type="pil",
generator=torch.Generator(device="cuda").manual_seed(42),
guidance_scale=request.guidance_scale,
width=request.width,
height=request.height,
skip_guidance_layers=[7, 8, 9], # Doesn't seem to work with batching
).images[0]
```
Commenting out `skip_guidance_layers` resolves the error.
**Expected behavior**
Batch inference should work correctly even when `skip_guidance_layers` is used with LyCORIS.
### Logs
_No response_
### System Info
**Environment**
* CUDA Version: 12.4
* Python version: 3.12.1 (main, Jan 11 2024, 10:22:40) [GCC 10.2.1 20210110]
* Diffusers version: https://github.com/huggingface/diffusers.git@99c0483b67427de467f11aa35d54678fd36a7ea2
* The specific LyCORIS model and inference method used from Bghira: https://huggingface.co/bghira/sd35m-photo-mixedres-cL-sS3-noOverride?not-for-all-audiences=true
### Who can help?
@sayakpaul
| This is not a fully reproducible snippet. Please provide one.
Cc: @asomoza and @yiyixuxu for skip layer guidance.
Reproducible with:
```bash
pip install lycoris_lora
```
```python
from diffusers import StableDiffusion3Pipeline, FlowMatchEulerDiscreteScheduler
import torch
from huggingface_hub import hf_hub_download
from lycoris import create_lycoris_from_weights
pipe = StableDiffusion3Pipeline.from_pretrained(
"stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
)
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
pipe.scheduler.config, timestep_spacing="trailing", shift=3.0
)
pipe.to("cuda")
weights_path = hf_hub_download(
"bghira/sd35m-photo-mixedres-cL-sS3-noOverride",
filename="pytorch_lora_weights.safetensors",
)
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, weights_path, pipe.transformer)
wrapper.merge_to()
prompt = "A photo-realistic image of a cat"
num_inference_steps = 28
guidance_scale = 3.5
width = 1024
height = 1024
image = pipe(
prompt=prompt,
num_inference_steps=num_inference_steps,
num_images_per_prompt=2, # Batch inference
output_type="pil",
generator=torch.Generator(device="cuda").manual_seed(42),
guidance_scale=guidance_scale,
width=width,
height=height,
skip_guidance_layers=[7, 8, 9], # Doesn't seem to work with batching
).images[0]
```
```python
Traceback (most recent call last):
File "/workspace/test.py", line 30, in <module>
image = pipe(
^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/diffusers/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 918, in __call__
noise_pred = self.transformer(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/diffusers/src/diffusers/models/transformers/transformer_sd3.py", line 393, in forward
temb = self.time_text_embed(timestep, pooled_projections)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/diffusers/src/diffusers/models/embeddings.py", line 1208, in forward
conditioning = timesteps_emb + pooled_projections
~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 0
```
So, SLG seems to be the culprit here?
> So, SLG seems to be the culprit here?
Indeed, problem seems to appear since I started using SLG
After some testing, lycoris is unrelated; minimally reproducible with:
```python
from diffusers import StableDiffusion3Pipeline, FlowMatchEulerDiscreteScheduler
import torch
pipe = StableDiffusion3Pipeline.from_pretrained(
"stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
)
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
pipe.scheduler.config, timestep_spacing="trailing", shift=3.0
)
pipe.to("cuda")
prompt = "A photo-realistic image of a cat"
num_inference_steps = 28
guidance_scale = 3.5
width = 1024
height = 1024
image = pipe(
prompt=prompt,
num_inference_steps=num_inference_steps,
num_images_per_prompt=2,
output_type="pil",
generator=torch.Generator("cuda").manual_seed(42),
guidance_scale=guidance_scale,
width=width,
height=height,
skip_guidance_layers=[7, 8, 9],
).images[0]
image.save("minimal.png")
```
```
latent_model_input: torch.Size([2, 16, 128, 128])
timestep: torch.Size([2])
prompt_embeds: torch.Size([4, 333, 4096])
pooled_prompt_embeds: torch.Size([4, 2048])
```
Batch discrepancy, from
https://github.com/huggingface/diffusers/blob/827b6c25f9b78a297345f356a7d152fd6faf27d8/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L912
Same discrepancy with num_images_per_prompt=1, I think in this case PyTorch broadcasting takes care of it
```
latent_model_input: torch.Size([1, 16, 128, 128])
timestep: torch.Size([1])
prompt_embeds: torch.Size([2, 333, 4096])
pooled_prompt_embeds: torch.Size([2, 2048])
```
Ends up with similar issue in the next section. I'll do more testing and put together a PR.
https://github.com/huggingface/diffusers/blob/827b6c25f9b78a297345f356a7d152fd6faf27d8/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L938-L946
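For reference, a rough sketch of the direction such a fix could take, based on the shape analysis above and the patch at the top of this record (schematic; the variables refer to the locals of `StableDiffusion3Pipeline.__call__`):
```python
# always double the latents under CFG, and give the skip-layer pass the
# *undoubled* latents with a timestep re-expanded to that smaller batch
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
timestep = t.expand(latent_model_input.shape[0])   # CFG batch: 2 * num_images

if skip_guidance_layers is not None and should_skip_layers:
    # the skip-layer pass runs without CFG, so its inputs must not be doubled
    timestep = t.expand(latents.shape[0])
    latent_model_input = latents
```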
| 1,733,160,636,000 | null | Bug Report | [
"src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py:StableDiffusion3Pipeline.__call__"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-10067 | 827b6c25f9b78a297345f356a7d152fd6faf27d8 | diff --git a/src/diffusers/models/upsampling.py b/src/diffusers/models/upsampling.py
index cf07e45b0c5c..af04ae4b93cf 100644
--- a/src/diffusers/models/upsampling.py
+++ b/src/diffusers/models/upsampling.py
@@ -165,6 +165,14 @@ def forward(self, hidden_states: torch.Tensor, output_size: Optional[int] = None
# if `output_size` is passed we force the interpolation output
# size and do not make use of `scale_factor=2`
if self.interpolate:
+ # upsample_nearest_nhwc also fails when the number of output elements is large
+ # https://github.com/pytorch/pytorch/issues/141831
+ scale_factor = (
+ 2 if output_size is None else max([f / s for f, s in zip(output_size, hidden_states.shape[-2:])])
+ )
+ if hidden_states.numel() * scale_factor > pow(2, 31):
+ hidden_states = hidden_states.contiguous()
+
if output_size is None:
hidden_states = F.interpolate(hidden_states, scale_factor=2.0, mode="nearest")
else:
| [BUG - STABLE DIFFUSION 3] Grey images generated
### Describe the bug
I'm running the SD3 model [stabilityai/stable-diffusion-3-medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium) with the following settings:
- Height: 1024
- Width: 1024
- Inference steps: 50
- Guidance scale: 7
- Prompts length: 32 (using the same input prompt: "A men jumps from a high building")
The model generates 32 images, but half of them are grey.
### Reproduction
Reproduction code:
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '7'
import torch
from diffusers import StableDiffusion3Pipeline
base_dir = os.path.join(os.path.dirname(__file__), '..', '..')
def run():
    input_prompts = ["A men jumps from a high building"] * 32
    img_save_dir = f'{base_dir}/data/test_generated_img'
    os.makedirs(f'{img_save_dir}', exist_ok=True)
    pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16, cache_dir="/data0/tien/cache")
    pipe.to("cuda")
    torch.set_grad_enabled(False)
    images = pipe(
        prompt=input_prompts,
        negative_prompt="",
        height=1024,
        width=1024,
        num_inference_steps=50,
        guidance_scale=7.0
    ).images
    for j in range(len(input_prompts)):
        images[j].save(os.path.join(f'{img_save_dir}', f'{j}.jpg'))
    torch.cuda.empty_cache()
run()
```
### Logs
_No response_
### System Info
- Device: H100
- Driver Version: 550.127.05
- CUDA Version: 12.4
- Torch: 2.5.1+cu124
- OS: Ubuntu 22.04.3 LTS
- Python: 3.10.15
- diffusers: 0.31.0
### Who can help?
_No response_
| Does this happen when you switch to torch.bfloat16? Also, was this ever working as expected and then suddenly stopped?
@sayakpaul Hi, the error persists when using BF16.
Out of the 32 generated images, the first 16 are fine, but the last 16 are all in grayscale.
Weird.
Does it happen on other prompts?
Yes, it appears that this error occurs with any prompt.
@asomoza have you heard about this before?
no, but SD 3 had some other problems, so probably no one else did extensive tests or generations with it.
@hoangvictor is there a reason you're using [stable-diffusion-3-medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium) instead of [stable-diffusion-3.5-medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium)?
@asomoza SD3.5 has been out since October, but when I started my project, only SD3 was available. I’ve also tried SD3.5, and the issue persists. It appears the problem lies with the VAE during the image decoding process.
I did a test with an A100 80GB; I could only do 24 images or I get OOM. I could replicate your error. This is a rare test for me because I would never throw the same prompt at the model 32 times instead of doing it in a loop; I like to see the results, and most people just use a batch of 4.
Anyway, after the 16th image, the rest are all grey images so I can confirm this issue cc: @sayakpaul
Hmm, @DN6 could this be an issue with prompt encoding?
What happens when we generate 24 images for a single prompt by specifying `num_images_per_prompt`? Also does it happen with other pipelines too?
Very weird that it happens after we request a certain number of images, no?
It also happens with `num_images_per_prompt` when it's more than 16.
Tested it with SD 1.5 and this issue didn't happen.
It happens with any number of images when it's more than 16.
This is a PyTorch bug / limitation in the upsample operation. I found a workaround that I'll push tomorrow. Meanwhile, you can retrieve the latents instead of the images using `output_type="latent"` in your pipeline invocation, and then decode them manually like this:
```python
with torch.inference_mode():
    latents = (latents / pipe.vae.config.scaling_factor) + pipe.vae.config.shift_factor
    images_1 = pipe.vae.decode(latents[:16], return_dict=False)[0]
    images_2 = pipe.vae.decode(latents[16:], return_dict=False)[0]
    images = torch.cat([images_1, images_2])
    images = pipe.image_processor.postprocess(images)
``` | 1,733,055,063,000 | null | Bug Report | [
"src/diffusers/models/upsampling.py:Upsample2D.forward"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-9978 | 64b3e0f5390728f62887be7820a5e2724d0fb419 | diff --git a/src/diffusers/loaders/single_file_utils.py b/src/diffusers/loaders/single_file_utils.py
index d1bad8b5a7cd..9a460cb5d1ef 100644
--- a/src/diffusers/loaders/single_file_utils.py
+++ b/src/diffusers/loaders/single_file_utils.py
@@ -62,7 +62,14 @@
"xl_base": "conditioner.embedders.1.model.transformer.resblocks.9.mlp.c_proj.bias",
"xl_refiner": "conditioner.embedders.0.model.transformer.resblocks.9.mlp.c_proj.bias",
"upscale": "model.diffusion_model.input_blocks.10.0.skip_connection.bias",
- "controlnet": "control_model.time_embed.0.weight",
+ "controlnet": [
+ "control_model.time_embed.0.weight",
+ "controlnet_cond_embedding.conv_in.weight",
+ ],
+ # TODO: find non-Diffusers keys for controlnet_xl
+ "controlnet_xl": "add_embedding.linear_1.weight",
+ "controlnet_xl_large": "down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.weight",
+ "controlnet_xl_mid": "down_blocks.1.attentions.0.norm.weight",
"playground-v2-5": "edm_mean",
"inpainting": "model.diffusion_model.input_blocks.0.0.weight",
"clip": "cond_stage_model.transformer.text_model.embeddings.position_embedding.weight",
@@ -96,6 +103,9 @@
"inpainting": {"pretrained_model_name_or_path": "stable-diffusion-v1-5/stable-diffusion-inpainting"},
"inpainting_v2": {"pretrained_model_name_or_path": "stabilityai/stable-diffusion-2-inpainting"},
"controlnet": {"pretrained_model_name_or_path": "lllyasviel/control_v11p_sd15_canny"},
+ "controlnet_xl_large": {"pretrained_model_name_or_path": "diffusers/controlnet-canny-sdxl-1.0"},
+ "controlnet_xl_mid": {"pretrained_model_name_or_path": "diffusers/controlnet-canny-sdxl-1.0-mid"},
+ "controlnet_xl_small": {"pretrained_model_name_or_path": "diffusers/controlnet-canny-sdxl-1.0-small"},
"v2": {"pretrained_model_name_or_path": "stabilityai/stable-diffusion-2-1"},
"v1": {"pretrained_model_name_or_path": "stable-diffusion-v1-5/stable-diffusion-v1-5"},
"stable_cascade_stage_b": {"pretrained_model_name_or_path": "stabilityai/stable-cascade", "subfolder": "decoder"},
@@ -481,8 +491,16 @@ def infer_diffusers_model_type(checkpoint):
elif CHECKPOINT_KEY_NAMES["upscale"] in checkpoint:
model_type = "upscale"
- elif CHECKPOINT_KEY_NAMES["controlnet"] in checkpoint:
- model_type = "controlnet"
+ elif any(key in checkpoint for key in CHECKPOINT_KEY_NAMES["controlnet"]):
+ if CHECKPOINT_KEY_NAMES["controlnet_xl"] in checkpoint:
+ if CHECKPOINT_KEY_NAMES["controlnet_xl_large"] in checkpoint:
+ model_type = "controlnet_xl_large"
+ elif CHECKPOINT_KEY_NAMES["controlnet_xl_mid"] in checkpoint:
+ model_type = "controlnet_xl_mid"
+ else:
+ model_type = "controlnet_xl_small"
+ else:
+ model_type = "controlnet"
elif (
CHECKPOINT_KEY_NAMES["stable_cascade_stage_c"] in checkpoint
@@ -1072,6 +1090,9 @@ def convert_controlnet_checkpoint(
config,
**kwargs,
):
+ # Return checkpoint if it's already been converted
+ if "time_embedding.linear_1.weight" in checkpoint:
+ return checkpoint
# Some controlnet ckpt files are distributed independently from the rest of the
# model components i.e. https://huggingface.co/thibaud/controlnet-sd21/
if "time_embed.0.weight" in checkpoint:
| ControlNet broken from_single_file
### Describe the bug
controlnet loader from_single_file was originally added via #4084
and method `ControlNet.from_single_file()` works for non-converted controlnets.
but for controlnets in safetensors format that contain already converted state_dict, it errors out.
it's not reasonable to expect a user to know the internal dict structure of a controlnet safetensors file
before they can use it.
even worse, some of the newer controlnets are distributed as single-file-only and are already in diffusers format
which makes them impossible to load in diffusers.
for example: <https://huggingface.co/Laxhar/noob_openpose/tree/main>
this issue was already mentioned several times, each time closed as "works as designed"
when in reality it's just a failure that should be addressed as an issue.
see #8474 #9208 #8614 as examples of previous issues
### Reproduction
scenario-1: works with non-converted controlnet
```python
import torch
from diffusers import ControlNetModel
from huggingface_hub import hf_hub_download
local_path = hf_hub_download(repo_id='Aptronym/SDNext', filename='ControlNet11/controlnet11Models_canny.safetensors')
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)
print(cn.__class__)
```
scenario-2: fails for the majority of controlnets available on huggingface
```python
import torch
from diffusers import ControlNetModel
from huggingface_hub import hf_hub_download
local_path = hf_hub_download(repo_id='lllyasviel/sd_control_collection', filename='diffusers_xl_canny_small.safetensors')
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)
print(cn.__class__)
```
initial failure is nonsense
> OSError: stable-diffusion-v1-5/stable-diffusion-v1-5 does not appear to have a file named config.json.
what's making this worse is that SD15 and SDXL share the same `ControlNet` class, which causes some
confusion about which base repo to look up the config from.
e.g., here we're loading an SDXL controlnet and the error refers to the SD15 repo.
anyhow, trying to force correct config:
```py
cn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16, config='diffusers/controlnet-canny-sdxl-1.0-small')
```
results in even worse nonsense failure during loading of state_dict:
> TypeError: is_floating_point(): argument 'input' (position 1) must be Tensor, not NoneType
### System Info
diffusers=0.32.0.dev0
python==3.12.3
torch==2.5.1+cu124
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
| > even worse, some of the newer controlnets are distributed as single-file-only and are already in diffusers format
which makes them impossible to load in diffusers.
for example: https://huggingface.co/Laxhar/noob_openpose/tree/main
Isn't this an actual error as it is partially in the diffusers format by not including the `config.json`?
@sayakpaul cannot blame users for not knowing internals of `diffusers`.
99% of users will attempt to download that safetensors file and load it - and it will fail.
sure, author of that specific repo should add `config.json` (and yes, request to author is already created),
but are we going to re-educate every user "don't download controlnet safetensors"?
(and there is no way to differentiate from user perspective which ones can and which cannot be downloaded)
In that case, what would a good solution look like to you? I will of course let @DN6 comment here, but IMO the following could be good candidates:
* Detect if this is a `diffusers` checkpoint and if the supplied path doesn't have a `config.json`. If detected, yield a nice error message.
* Detect if this is a `diffusers` checkpoint and try to fetch a `config.json` but I am not sure how well this would play out as it involves quite a bit of guess work from what I can see.
> Detect if this is a diffusers checkpoint and try to fetch a config.json but I am not sure how well this would play out as it involves quite a bit of guess work from what I can see.
which guesswork? whether it's sd15 or sdxl should be easy to infer from the state_dict itself.
that's how it works when loading the main diffusion model using from_single_file. if i were a user, i would have the same expectation here.
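A minimal sketch of that kind of inference (the SDXL-only key below is the one the patch above registers as `controlnet_xl`; treat the helper itself as illustrative):
```python
# illustrative: a Diffusers-format SDXL ControlNet state dict carries keys that
# the SD1.5 variant does not, e.g. the add_embedding used for XL conditioning
def looks_like_sdxl_controlnet(state_dict) -> bool:
    return "add_embedding.linear_1.weight" in state_dict
```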
Hmm could be. I thought too fast. So, fetch the config, load the state dict -- looks like the workflow here.
```py
with init_empty_weights():
    config = ControlNetModel.load_config("...")
    controlnet_model = ControlNetModel.from_config(config)
controlnet_model.load_state_dict(diffusers_state_dict)
```
> Hmm could be. I thought too fast. So, fetch the config, load the state dict -- looks like the workflow here.
btw, as a starting point, we can pass `config=...` arg to `from_single_file` and add autodetect later.
but need to make `from_single_file` actually work :)
In the first case the model config is being detected as `v1` because the checked keys for `controlnet` are for non-Diffusers type checkpoints.
https://github.com/huggingface/diffusers/blob/f6f7afa1d7c6f45f8568c5603b1e6300d4583f04/src/diffusers/loaders/single_file_utils.py#L457
https://github.com/huggingface/diffusers/blob/f6f7afa1d7c6f45f8568c5603b1e6300d4583f04/src/diffusers/loaders/single_file_utils.py#L65
`infer_diffusers_model_type` can also be extended to detect e.g. `v1` vs `xl` ControlNet.
In the second case `convert_controlnet_checkpoint` expects a non-Diffusers type; the state dict it returns appears to contain only `controlnet_mid_block.weight` and `controlnet_mid_block.bias`, both as `None`, which in turn causes the error when the state dict is loaded into the model.
`convert_controlnet_checkpoint` can return the provided checkpoint when Diffusers keys are detected.
https://github.com/huggingface/diffusers/blob/f6f7afa1d7c6f45f8568c5603b1e6300d4583f04/src/diffusers/loaders/single_file_utils.py#L1169-L1170
https://github.com/huggingface/diffusers/blob/f6f7afa1d7c6f45f8568c5603b1e6300d4583f04/src/diffusers/loaders/single_file_utils.py#L1070
https://github.com/huggingface/diffusers/blob/f6f7afa1d7c6f45f8568c5603b1e6300d4583f04/src/diffusers/models/model_loading_utils.py#L170
| 1,732,125,328,000 | null | Bug Report | [
"src/diffusers/loaders/single_file_utils.py:infer_diffusers_model_type",
"src/diffusers/loaders/single_file_utils.py:convert_controlnet_checkpoint"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-9885 | 5588725e8e7be497839432e5328c596169385f16 | diff --git a/src/diffusers/utils/dynamic_modules_utils.py b/src/diffusers/utils/dynamic_modules_utils.py
index f0cf953924ad..50d9bbaac57c 100644
--- a/src/diffusers/utils/dynamic_modules_utils.py
+++ b/src/diffusers/utils/dynamic_modules_utils.py
@@ -325,7 +325,7 @@ def get_cached_module_file(
# We always copy local files (we could hash the file to see if there was a change, and give them the name of
# that hash, to only copy when there is a modification but it seems overkill for now).
# The only reason we do the copy is to avoid putting too many folders in sys.path.
- shutil.copy(resolved_module_file, submodule_path / module_file)
+ shutil.copyfile(resolved_module_file, submodule_path / module_file)
for module_needed in modules_needed:
if len(module_needed.split(".")) == 2:
module_needed = "/".join(module_needed.split("."))
@@ -333,7 +333,7 @@ def get_cached_module_file(
if not os.path.exists(submodule_path / module_folder):
os.makedirs(submodule_path / module_folder)
module_needed = f"{module_needed}.py"
- shutil.copy(os.path.join(pretrained_model_name_or_path, module_needed), submodule_path / module_needed)
+ shutil.copyfile(os.path.join(pretrained_model_name_or_path, module_needed), submodule_path / module_needed)
else:
# Get the commit hash
# TODO: we will get this info in the etag soon, so retrieve it from there and not here.
@@ -350,7 +350,7 @@ def get_cached_module_file(
module_folder = module_file.split("/")[0]
if not os.path.exists(submodule_path / module_folder):
os.makedirs(submodule_path / module_folder)
- shutil.copy(resolved_module_file, submodule_path / module_file)
+ shutil.copyfile(resolved_module_file, submodule_path / module_file)
# Make sure we also have every file with relative
for module_needed in modules_needed:
| Replace shutil.copy with shutil.copyfile
shutil.copy copies permission bits, which fails when the user who's running the script is trying to use a common cache that was generated by another user, even though the first user has read & write permissions over the cache (through group permission, for example). A real-world scenario: submitting jobs on a GPU cluster accessed by multiple users with a common cache directory to reduce disk and network usage.
https://github.com/huggingface/diffusers/blob/0d1d267b12e47b40b0e8f265339c76e0f45f8c49/src/diffusers/utils/dynamic_modules_utils.py#L328
Suggested solution: replace shutil.copy with shutil.copyfile, which doesn't copy the permission bits.
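For reference, a small illustration of the difference (the paths are placeholders):
```python
import shutil

# shutil.copy copies the contents *and* the source's permission mode bits, so a file
# cached by one user can end up with permissions that block another user of a shared cache.
shutil.copy("remote_module.py", "/shared/cache/remote_module.py")

# shutil.copyfile copies only the contents; the destination is created with the
# calling user's own default permissions, which is what a shared cache needs.
shutil.copyfile("remote_module.py", "/shared/cache/remote_module.py")
```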
| Maybe related to https://github.com/huggingface/huggingface_hub/pull/1220, https://github.com/huggingface/diffusers/issues/1517, https://github.com/huggingface/huggingface_hub/issues/1141.
For info I'm using v0.23.0.
@Wauplin WDYT?
Agree with using `shutil.copyfile` yes! I didn't think about permission issues back then. copyfile will only copy the file content, but that is what we want here :)
@almarouk would you be willing to open a PR to help us? | 1,731,005,873,000 | null | Bug Report | [
"src/diffusers/utils/dynamic_modules_utils.py:get_cached_module_file"
] | [] |
|
fortra/impacket | fortra__impacket-1860 | e9a47ffc2b56755908b4a0e73348c650cf5c723f | diff --git a/impacket/examples/secretsdump.py b/impacket/examples/secretsdump.py
index 43b776218..537c45dab 100644
--- a/impacket/examples/secretsdump.py
+++ b/impacket/examples/secretsdump.py
@@ -1432,7 +1432,7 @@ def dump(self):
userName = V[userAccount['NameOffset']:userAccount['NameOffset']+userAccount['NameLength']].decode('utf-16le')
if userAccount['NTHashLength'] == 0:
- logging.error('SAM hashes extraction for user %s failed. The account doesn\'t have hash information.' % userName)
+ logging.debug('The account %s doesn\'t have hash information.' % userName)
continue
encNTHash = b''
| SAM Dump for accounts without secrets
I realised that some default Windows accounts, like for example WDAGUtilityAccount, throw the following error:

However there is no error here. WDAGUtilityAccount does not have an NT hash in the SAM database because this is a virtual account used to contain applications in a sandbox (for example browsers), and these features are not used on Windows servers. Considering I never saw secretsdump fail while dumping the SAM database, I believe it is possible to switch the following lines from impacket/impacket/examples/secretsdump.py:
```python
if userAccount['NTHashLength'] == 0:
    logging.error('SAM hashes extraction for user %s failed. The account doesn\'t have hash information.' % userName)
    continue
```
to
```python
if userAccount['NTHashLength'] == 0:
    logging.debug('SAM hashes extraction for user %s failed. The account doesn\'t have hash information.' % userName)
    continue
```
That way most tools using impacket secretsdump won't have messed up output.
Let me know what you think about this :)
| Hi @Dfte,
Which configuration are you running on? I tried here with a Windows Server 2019 in azure and the account `WDAGUtilityAccount` has a hash that is printed when running secretsdump
Also rechecked that the account was disabled (according to #802), and it was

I'm running a Windows Server 2019 v1809
I have tested this against a domain controller. Now it's obvious that there are no hashes for such accounts in the SAM database, which is disabled anyway on domain controllers. But is there really a reason to display an error for such use cases, considering there is no way for a local account not to have at least a default LM/NT hash?
Hey,
mm I don't think so.
I wanted to better understand / replay that scenario to check if there's some property in those objects that could help us realize it's not a problem that the hash wasn't gathered.
But yes, I agree with you that we shouldn't be showing this case as an error
Will be playing with it a bit more and can create a PR to change it | 1,733,518,382,000 | null | Feature Request | [
"impacket/examples/secretsdump.py:SAMHashes.dump"
] | [] |
|
fortra/impacket | fortra__impacket-1858 | af51dfd1e0bf4472200b4a2d560cd70df7904f9c | diff --git a/impacket/dhcp.py b/impacket/dhcp.py
index aeabb4013..dde0c3909 100644
--- a/impacket/dhcp.py
+++ b/impacket/dhcp.py
@@ -178,7 +178,7 @@ def unpackOptions(self, options):
# size = self.calcUnpackSize(format, options[i+1:])
size = options[i+1]
# print i, name, format, size
- value = self.unpack(format, options[i+2:i+2+size])
+ value = self.unpack(format, bytes(options[i+2:i+2+size]))
answer.append((name, value))
i += 2+size
| dhcp.py: decode error "object has no attribute 'encode'"
### Configuration
impacket version: HEAD
Python version: 3.11
Target OS: Linux
### Debug Output With Command String
````
dhcp = DhcpPacket(buffer)
print(dhcp)
``````
```
Traceback (most recent call last):
File "/home/mdt/Source/emdete/honeypot/snooper/main.py", line 14, in learn_dhcp
dhcp = DHCP(buffer)
^^^^^^^^^^^^
File "/home/mdt/Source/oss/python/impacket/impacket/dhcp.py", line 148, in __init__
structure.Structure.__init__(self, data, alignment)
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 87, in __init__
self.fromString(data)
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 152, in fromString
self[field[0]] = self.unpack(field[1], data[:size], dataClassOrCode = dataClassOrCode, field = field[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 307, in unpack
return eval(dataClassOrCode, {}, fields)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
File "/home/mdt/Source/oss/python/impacket/impacket/dhcp.py", line 179, in unpackOptions
value = self.unpack(format, options[i+2:i+2+size])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 382, in unpack
return dataClassOrCode(data)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/six.py", line 644, in b
return s.encode("latin-1")
^^^^^^^^
AttributeError: ("'bytearray' object has no attribute 'encode'", "When unpacking field 'options | _ | b''[:0]'")
```
### PCAP
```
b'\x01\x01\x06\x00\xa3Y\xdf\x06\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf84A\xdb2.\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00c\x82Sc5\x01\x032\x04\xc0\xa8\x00\x8c6\x04\xc0\xa8\x00\x019\x02\x02@7\x07\x01\x03\x06\x0c\x0f\x1c*<\x0cudhcp 1.36.1=\x07\x01\xf84A\xdb2.\xff\x00\x00\x00\x00\x00\x00\x00\x00'
```
### Additional context
| it seems the line 381 in `impacket/structure.py` should read:
```
if (isinstance(data, bytes) or isinstance(data, bytearray)) and dataClassOrCode is b:
```
because the buffer is not bytes but bytearray. fixing this leads to the next error:
```
Traceback (most recent call last):
File "/home/mdt/Source/emdete/honeypot/snooper/main.py", line 14, in learn_dhcp
dhcp = DhcpPacket(buffer)
^^^^^^^^^^^^^^^^^^
File "/home/mdt/Source/oss/python/impacket/impacket/dhcp.py", line 148, in __init__
structure.Structure.__init__(self, data, alignment)
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 87, in __init__
self.fromString(data)
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 152, in fromString
self[field[0]] = self.unpack(field[1], data[:size], dataClassOrCode = dataClassOrCode, field = field[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 307, in unpack
return eval(dataClassOrCode, {}, fields)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
File "/home/mdt/Source/oss/python/impacket/impacket/dhcp.py", line 179, in unpackOptions
value = self.unpack(format, options[i+2:i+2+size])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mdt/Source/oss/python/impacket/impacket/structure.py", line 386, in unpack
return unpack(format, data)[0]
^^^^^^^^^^^^^^^^^^^^
struct.error: ('unpack requires a buffer of 4 bytes', "When unpacking field 'options | _ | b''[:0]'")
```
I also got this error, here is my code:
```
import struct
from impacket import ImpactDecoder
from impacket import dhcp
import socket
dhcp_packet = (
b'\x01' # Message type: Boot Request (1 byte)
b'\x01' # Hardware type: Ethernet (1 byte)
b'\x06' # Hardware address length: 6 (1 byte)
b'\x00' # Hops: 0 (1 byte)
b'\x39\x03\xF3\x26' # Transaction ID: Random (4 bytes)
b'\x00\x00' # Seconds elapsed: 0 (2 bytes)
b'\x80\x00' # Flags: 0x8000 (Broadcast) (2 bytes)
b'\x00\x00\x00\x00' # Client IP address: 0.0.0.0 (4 bytes)
b'\x00\x00\x00\x00' # Your (client) IP address: 0.0.0.0 (4 bytes)
b'\x00\x00\x00\x00' # Next server IP address: 0.0.0.0 (4 bytes)
b'\x00\x00\x00\x00' # Relay agent IP address: 0.0.0.0 (4 bytes)
b'\x00\x26\x9e\x04\x0a\x5b' # Client MAC address (6 bytes)
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' # Client hardware address padding (10 bytes)
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' # Server name padding (64 bytes)
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' # Boot filename padding (128 bytes)
b'\x63\x82\x53\x63' # Magic cookie: DHCP (4 bytes)
# DHCP Options:
b'\x35\x01\x01'
b'\x32\x04\xc0\xa8\x01\x64'
b'\x33\x04\x01\xa8\x01\x64' # Option: (53) DHCP Message Type (Discover) (3 bytes)
# b'\x3d\x07\x01\x00\x26\x9e\x04\x0a\x5b' # Option: (61) Client identifier (7 bytes)
b'\x37\x01\x03\x01\x06' # Option: (55) Parameter Request List (5 bytes)
# b'\xff' # End Option (1 byte)
)
dhcp_decoder = ImpactDecoder.BootpDecoder()
dhcp_packet_decoded = dhcp_decoder.decode(dhcp_packet)
print(dhcp_packet_decoded)
```
Could anyone take a look at it?
As @emdete mentioned, this error can be solved by modifying the following condition to also include `bytearray` objects https://github.com/fortra/impacket/blob/3ce41be452dfe578f7edea16bc816e4f7fabe04d/impacket/structure.py#L384
The fix _may_ cause some unwanted splash damage so further testing is required.
Another option would be to fix the method responsible for calling structure.unpack with bytearray as an argument (instead of bytes). It was recently introduced by another PR: https://github.com/fortra/impacket/commit/3f645107bb4db65fd8a328399031a257723c6bfb.
We could convert `options[...]` to `bytes` in the following line: https://github.com/fortra/impacket/blob/3ce41be452dfe578f7edea16bc816e4f7fabe04d/impacket/dhcp.py#L181
Both alternatives fix the problem | 1,733,378,609,000 | null | Bug Report | [
"impacket/dhcp.py:DhcpPacket.unpackOptions"
] | [] |
|
modin-project/modin | modin-project__modin-7400 | 78674005577efea7aa7c5e3e7c6fb53bd0365fe5 | diff --git a/modin/pandas/dataframe.py b/modin/pandas/dataframe.py
index de96ea0ab26..2ce83913ebb 100644
--- a/modin/pandas/dataframe.py
+++ b/modin/pandas/dataframe.py
@@ -2074,12 +2074,12 @@ def squeeze(
Squeeze 1 dimensional axis objects into scalars.
"""
axis = self._get_axis_number(axis) if axis is not None else None
- if axis is None and (len(self.columns) == 1 or len(self.index) == 1):
+ if axis is None and (len(self.columns) == 1 or len(self) == 1):
return Series(query_compiler=self._query_compiler).squeeze()
if axis == 1 and len(self.columns) == 1:
self._query_compiler._shape_hint = "column"
return Series(query_compiler=self._query_compiler)
- if axis == 0 and len(self.index) == 1:
+ if axis == 0 and len(self) == 1:
qc = self.T._query_compiler
qc._shape_hint = "column"
return Series(query_compiler=qc)
| Avoid unnecessary length checks in `df.squeeze`
It is possible that when `axis=1` in squeeze we still check `len(self.index)`, which is never necessary when `axis=1`. Link to code here: https://github.com/modin-project/modin/blob/eac3c77baf456c7bd7e1e5fde81790a4ed3ebb27/modin/pandas/dataframe.py#L2074-L2084
This is an easy fix, also see https://github.com/snowflakedb/snowpark-python/pull/1767
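A hypothetical illustration of the pattern the issue is pointing at (not the actual Modin code): keep the cheap `axis` test first so short-circuit evaluation skips the potentially expensive distributed row count.
```python
# hypothetical sketch -- len(self) on a distributed DataFrame can trigger a real row count
if axis == 0 and len(self) == 1:      # row count is only evaluated when axis == 0
    ...                               # squeeze along rows
# versus the ordering the issue warns about:
if len(self) == 1 and axis == 0:      # row count is evaluated even when axis == 1
    ...
```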
| 1,726,780,817,000 | null | Performance Issue | [
"modin/pandas/dataframe.py:DataFrame.squeeze"
] | [] |
||
ccxt/ccxt | ccxt__ccxt-24388 | f6119ba226704f2907e48c94caa13a767510fcd4 | diff --git a/python/ccxt/base/exchange.py b/python/ccxt/base/exchange.py
index 9b79354f89c5..66f8170154a4 100644
--- a/python/ccxt/base/exchange.py
+++ b/python/ccxt/base/exchange.py
@@ -382,6 +382,7 @@ def __init__(self, config={}):
self.transactions = dict() if self.transactions is None else self.transactions
self.ohlcvs = dict() if self.ohlcvs is None else self.ohlcvs
self.liquidations = dict() if self.liquidations is None else self.liquidations
+ self.myLiquidations = dict() if self.myLiquidations is None else self.myLiquidations
self.currencies = dict() if self.currencies is None else self.currencies
self.options = self.get_default_options() if self.options is None else self.options # Python does not allow to define properties in run-time with setattr
self.decimal_to_precision = decimal_to_precision
| binance myLiquidations uninitialized before accessed
### Operating System
Ubuntu
### Programming Languages
Python
### CCXT Version
4.4.27
### Description
Got the following error while watching the Binance websocket.
### Code
```
2024-11-26 05:56:31,267 - 875 - exchanges.binance - ERROR - 'NoneType' object does not support item assignment
Traceback (most recent call last):
File "/home/sytd/test/src/exchanges/future_exchange_base.py", line 118, in watch_balances
event = await self._exchange.watch_balance()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/pro/binance.py", line 2503, in watch_balance
return await self.watch(url, messageHash, message, type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/async_support/base/ws/fast_client.py", line 27, in handler
self.handle_message(message)
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/async_support/base/ws/aiohttp_client.py", line 34, in handle_message
self.handle_text_or_binary_message(message.data)
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/async_support/base/ws/aiohttp_client.py", line 29, in handle_text_or_binary_message
self.on_message_callback(self, decoded)
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/pro/binance.py", line 3966, in handle_message
method(client, message)
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/pro/binance.py", line 3396, in handle_order_update
self.handle_my_liquidation(client, message)
File "/home/sytd/test/.venv/lib/python3.11/site-packages/ccxt/pro/binance.py", line 518, in handle_my_liquidation
self.myLiquidations[symbol] = myLiquidations
~~~~~~~~~~~~~~~~~~~^^^^^^^^
TypeError: 'NoneType' object does not support item assignment
```
| Hello @sytranvn,
Thanks for reporting it, we will fix it asap
@sytranvn Btw, what's the best way of reproducing the issue?
I was just listening for balance events only. Maybe place a future position and wait for it to be liquidated. | 1,732,671,237,000 | null | Bug Report | [
"python/ccxt/base/exchange.py:Exchange.__init__"
] | [] |
|
Qiskit/qiskit | Qiskit__qiskit-13554 | b7b26e000cd4baf3dcd28ca2f4607404bf736e2b | diff --git a/qiskit/circuit/parameterexpression.py b/qiskit/circuit/parameterexpression.py
index fe786762c09..feaa0b772c7 100644
--- a/qiskit/circuit/parameterexpression.py
+++ b/qiskit/circuit/parameterexpression.py
@@ -340,7 +340,7 @@ def _apply_operation(
either a constant or a second ParameterExpression.
Args:
- operation: One of operator.{add,sub,mul,truediv}.
+ operation: An operator, such as add, sub, mul, and truediv.
other: The second argument to be used with self in operation.
reflected: Optional - The default ordering is "self operator other".
If reflected is True, this is switched to "other operator self".
| Doc string of `operation` in ParameterExpression._apply_operation
It says
```
operation: One of operator.{add,sub,mul,truediv}.
```
But the function is also called with other operations, for example `pow` in `ParameterExpression.__pow__`.
| Good point, and while this is not user facing it would be nice to have the dev-facing docs correct. Would you like to open a small PR to fix it? | 1,733,917,262,000 | null | Bug Report | [
"qiskit/circuit/parameterexpression.py:ParameterExpression._apply_operation"
] | [] |
|
Qiskit/qiskit | Qiskit__qiskit-13552 | 17648ebb030c90fa7a595333b61823735275f68f | diff --git a/qiskit/circuit/parameterexpression.py b/qiskit/circuit/parameterexpression.py
index fe786762c09..feaa0b772c7 100644
--- a/qiskit/circuit/parameterexpression.py
+++ b/qiskit/circuit/parameterexpression.py
@@ -340,7 +340,7 @@ def _apply_operation(
either a constant or a second ParameterExpression.
Args:
- operation: One of operator.{add,sub,mul,truediv}.
+ operation: An operator, such as add, sub, mul, and truediv.
other: The second argument to be used with self in operation.
reflected: Optional - The default ordering is "self operator other".
If reflected is True, this is switched to "other operator self".
| Doc string of `operation` in ParameterExpression._apply_operation
It says
```
operation: One of operator.{add,sub,mul,truediv}.
```
But the function is also called with other operations, for example `pow` in `ParameterExpression.__pow__`.
| Good point, and while this is not user facing it would be nice to have the dev-facing docs correct. Would you like to open a small PR to fix it? | 1,733,909,546,000 | null | Bug Report | [
"qiskit/circuit/parameterexpression.py:ParameterExpression._apply_operation"
] | [] |
|
aio-libs/aiohttp | aio-libs__aiohttp-9767 | 51145aad138d03fc9f462e59b9c9398a75905899 | diff --git a/aiohttp/payload.py b/aiohttp/payload.py
index 27636977774..151f9dd497b 100644
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -101,6 +101,7 @@ def __init__(self) -> None:
self._first: List[_PayloadRegistryItem] = []
self._normal: List[_PayloadRegistryItem] = []
self._last: List[_PayloadRegistryItem] = []
+ self._normal_lookup: Dict[Any, PayloadType] = {}
def get(
self,
@@ -109,12 +110,20 @@ def get(
_CHAIN: "Type[chain[_PayloadRegistryItem]]" = chain,
**kwargs: Any,
) -> "Payload":
+ if self._first:
+ for factory, type_ in self._first:
+ if isinstance(data, type_):
+ return factory(data, *args, **kwargs)
+ # Try the fast lookup first
+ if lookup_factory := self._normal_lookup.get(type(data)):
+ return lookup_factory(data, *args, **kwargs)
+ # Bail early if its already a Payload
if isinstance(data, Payload):
return data
- for factory, type in _CHAIN(self._first, self._normal, self._last):
- if isinstance(data, type):
+ # Fallback to the slower linear search
+ for factory, type_ in _CHAIN(self._normal, self._last):
+ if isinstance(data, type_):
return factory(data, *args, **kwargs)
-
raise LookupError()
def register(
@@ -124,6 +133,11 @@ def register(
self._first.append((factory, type))
elif order is Order.normal:
self._normal.append((factory, type))
+ if isinstance(type, Iterable):
+ for t in type:
+ self._normal_lookup[t] = factory
+ else:
+ self._normal_lookup[type] = factory
elif order is Order.try_last:
self._last.append((factory, type))
else:
@@ -159,7 +173,8 @@ def __init__(
self._headers[hdrs.CONTENT_TYPE] = content_type
else:
self._headers[hdrs.CONTENT_TYPE] = self._default_content_type
- self._headers.update(headers or {})
+ if headers:
+ self._headers.update(headers)
@property
def size(self) -> Optional[int]:
@@ -228,9 +243,6 @@ class BytesPayload(Payload):
def __init__(
self, value: Union[bytes, bytearray, memoryview], *args: Any, **kwargs: Any
) -> None:
- if not isinstance(value, (bytes, bytearray, memoryview)):
- raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
-
if "content_type" not in kwargs:
kwargs["content_type"] = "application/octet-stream"
@@ -238,8 +250,10 @@ def __init__(
if isinstance(value, memoryview):
self._size = value.nbytes
- else:
+ elif isinstance(value, (bytes, bytearray)):
self._size = len(value)
+ else:
+ raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
if self._size > TOO_LARGE_BYTES_BODY:
kwargs = {"source": self}
| Payload registry has to do a linear search to find payloads
https://github.com/aio-libs/aiohttp/blob/21f5f92a755dc5ac7225b5e76f561553cf86565e/aiohttp/payload.py#L97
There is a note that it's inefficient.
We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
| > We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
Doesn't really work, because it works with isinstance checks, which could be a subclass or an abc. I thought about this previously and didn't think of any improvements. Not sure what zope.interface is though, might be worth looking into.
Maybe if we have a property that is a key that's the same for the subclassed object... not sure if that will work. Needs some more thought.
Not sure what you mean, the subclasses are e.g. subclass of bytes or (more realistically) a subclass of io.BufferedIO. They are not our classes, just arbitrary types that might be used to store/stream data.
I need to dig at it a bit more to come up with a solution. This one will take a bit of thought so I opened the issue so I don't forget to come back to it.
Looking at this
We can likely store the payload type() and then do a dict lookup on the type() of the incoming data; if there is no match (because it is a subclass), fall back to the linear search
We don't have a benchmark for simple post requests which makes this a bit more difficult to quantify what the performance drag of the linear search is. We should add a benchmark first | 1,731,233,876,000 | null | Performance Issue | [
"aiohttp/payload.py:PayloadRegistry.__init__",
"aiohttp/payload.py:PayloadRegistry.get",
"aiohttp/payload.py:PayloadRegistry.register",
"aiohttp/payload.py:Payload.__init__",
"aiohttp/payload.py:BytesPayload.__init__"
] | [] |
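A minimal sketch of the approach discussed above and taken in the patch (simplified, hypothetical names): exact types hit a dict lookup, and subclasses fall back to the original linear `isinstance` scan.

```python
class TinyRegistry:
    """Simplified stand-in for the payload registry idea, not aiohttp code."""

    def __init__(self):
        self._factories = []      # (factory, type) pairs, scanned linearly
        self._exact_lookup = {}   # type -> factory, for O(1) exact matches

    def register(self, factory, type_):
        self._factories.append((factory, type_))
        self._exact_lookup[type_] = factory

    def get(self, data):
        if factory := self._exact_lookup.get(type(data)):
            return factory(data)                 # fast path: exact type match
        for factory, type_ in self._factories:   # slow path: subclasses, ABCs
            if isinstance(data, type_):
                return factory(data)
        raise LookupError(type(data))


registry = TinyRegistry()
registry.register(lambda d: ("bytes-payload", d), bytes)
print(registry.get(b"abc"))            # resolved via the dict lookup

class MyBytes(bytes):
    pass

print(registry.get(MyBytes(b"abc")))   # resolved via the isinstance fallback
```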
|
aio-libs/aiohttp | aio-libs__aiohttp-9766 | cc9a14aa3a29e54e2da3045083cca865654e3ff9 | diff --git a/aiohttp/payload.py b/aiohttp/payload.py
index 27636977774..151f9dd497b 100644
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -101,6 +101,7 @@ def __init__(self) -> None:
self._first: List[_PayloadRegistryItem] = []
self._normal: List[_PayloadRegistryItem] = []
self._last: List[_PayloadRegistryItem] = []
+ self._normal_lookup: Dict[Any, PayloadType] = {}
def get(
self,
@@ -109,12 +110,20 @@ def get(
_CHAIN: "Type[chain[_PayloadRegistryItem]]" = chain,
**kwargs: Any,
) -> "Payload":
+ if self._first:
+ for factory, type_ in self._first:
+ if isinstance(data, type_):
+ return factory(data, *args, **kwargs)
+ # Try the fast lookup first
+ if lookup_factory := self._normal_lookup.get(type(data)):
+ return lookup_factory(data, *args, **kwargs)
+ # Bail early if its already a Payload
if isinstance(data, Payload):
return data
- for factory, type in _CHAIN(self._first, self._normal, self._last):
- if isinstance(data, type):
+ # Fallback to the slower linear search
+ for factory, type_ in _CHAIN(self._normal, self._last):
+ if isinstance(data, type_):
return factory(data, *args, **kwargs)
-
raise LookupError()
def register(
@@ -124,6 +133,11 @@ def register(
self._first.append((factory, type))
elif order is Order.normal:
self._normal.append((factory, type))
+ if isinstance(type, Iterable):
+ for t in type:
+ self._normal_lookup[t] = factory
+ else:
+ self._normal_lookup[type] = factory
elif order is Order.try_last:
self._last.append((factory, type))
else:
@@ -159,7 +173,8 @@ def __init__(
self._headers[hdrs.CONTENT_TYPE] = content_type
else:
self._headers[hdrs.CONTENT_TYPE] = self._default_content_type
- self._headers.update(headers or {})
+ if headers:
+ self._headers.update(headers)
@property
def size(self) -> Optional[int]:
@@ -228,9 +243,6 @@ class BytesPayload(Payload):
def __init__(
self, value: Union[bytes, bytearray, memoryview], *args: Any, **kwargs: Any
) -> None:
- if not isinstance(value, (bytes, bytearray, memoryview)):
- raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
-
if "content_type" not in kwargs:
kwargs["content_type"] = "application/octet-stream"
@@ -238,8 +250,10 @@ def __init__(
if isinstance(value, memoryview):
self._size = value.nbytes
- else:
+ elif isinstance(value, (bytes, bytearray)):
self._size = len(value)
+ else:
+ raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
if self._size > TOO_LARGE_BYTES_BODY:
kwargs = {"source": self}
| Payload registry has to do a linear search to find payloads
https://github.com/aio-libs/aiohttp/blob/21f5f92a755dc5ac7225b5e76f561553cf86565e/aiohttp/payload.py#L97
There is a note that it's inefficient.
We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
| > We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
Doesn't really work, because it works with isinstance checks, which could be a subclass or an abc. I thought about this previously and didn't think of any improvements. Not sure what zope.interface is though, might be worth looking into.
Maybe if we have a property that is a key that's the same for the subclassed object... not sure if that will work. Needs some more thought.
Not sure what you mean, the subclasses are e.g. subclass of bytes or (more realistically) a subclass of io.BufferedIO. They are not our classes, just arbitrary types that might be used to store/stream data.
I need to dig at it a bit more to come up with a solution. This one will take a bit of thought so I opened the issue so I don't forget to come back to it.
Looking at this
We can likely store the payload type() and then do a dict lookup on the type() of the incoming data; if there is no match (because it is a subclass), fall back to the linear search
We don't have a benchmark for simple post requests which makes this a bit more difficult to quantify what the performance drag of the linear search is. We should add a benchmark first | 1,731,233,868,000 | null | Performance Issue | [
"aiohttp/payload.py:PayloadRegistry.__init__",
"aiohttp/payload.py:PayloadRegistry.get",
"aiohttp/payload.py:PayloadRegistry.register",
"aiohttp/payload.py:Payload.__init__",
"aiohttp/payload.py:BytesPayload.__init__"
] | [] |
|
aio-libs/aiohttp | aio-libs__aiohttp-9762 | 50cccb3823e53e187723f5dd713e2f1299405d1e | diff --git a/aiohttp/payload.py b/aiohttp/payload.py
index ea50b6a38cb..9979ed269b6 100644
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -101,6 +101,7 @@ def __init__(self) -> None:
self._first: List[_PayloadRegistryItem] = []
self._normal: List[_PayloadRegistryItem] = []
self._last: List[_PayloadRegistryItem] = []
+ self._normal_lookup: Dict[Any, PayloadType] = {}
def get(
self,
@@ -109,12 +110,20 @@ def get(
_CHAIN: "Type[chain[_PayloadRegistryItem]]" = chain,
**kwargs: Any,
) -> "Payload":
+ if self._first:
+ for factory, type_ in self._first:
+ if isinstance(data, type_):
+ return factory(data, *args, **kwargs)
+ # Try the fast lookup first
+ if lookup_factory := self._normal_lookup.get(type(data)):
+ return lookup_factory(data, *args, **kwargs)
+ # Bail early if its already a Payload
if isinstance(data, Payload):
return data
- for factory, type in _CHAIN(self._first, self._normal, self._last):
- if isinstance(data, type):
+ # Fallback to the slower linear search
+ for factory, type_ in _CHAIN(self._normal, self._last):
+ if isinstance(data, type_):
return factory(data, *args, **kwargs)
-
raise LookupError()
def register(
@@ -124,6 +133,11 @@ def register(
self._first.append((factory, type))
elif order is Order.normal:
self._normal.append((factory, type))
+ if isinstance(type, Iterable):
+ for t in type:
+ self._normal_lookup[t] = factory
+ else:
+ self._normal_lookup[type] = factory
elif order is Order.try_last:
self._last.append((factory, type))
else:
@@ -159,7 +173,8 @@ def __init__(
self._headers[hdrs.CONTENT_TYPE] = content_type
else:
self._headers[hdrs.CONTENT_TYPE] = self._default_content_type
- self._headers.update(headers or {})
+ if headers:
+ self._headers.update(headers)
@property
def size(self) -> Optional[int]:
@@ -228,9 +243,6 @@ class BytesPayload(Payload):
def __init__(
self, value: Union[bytes, bytearray, memoryview], *args: Any, **kwargs: Any
) -> None:
- if not isinstance(value, (bytes, bytearray, memoryview)):
- raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
-
if "content_type" not in kwargs:
kwargs["content_type"] = "application/octet-stream"
@@ -238,8 +250,10 @@ def __init__(
if isinstance(value, memoryview):
self._size = value.nbytes
- else:
+ elif isinstance(value, (bytes, bytearray)):
self._size = len(value)
+ else:
+ raise TypeError(f"value argument must be byte-ish, not {type(value)!r}")
if self._size > TOO_LARGE_BYTES_BODY:
warnings.warn(
| Payload registry has to do a linear search to find payloads
https://github.com/aio-libs/aiohttp/blob/21f5f92a755dc5ac7225b5e76f561553cf86565e/aiohttp/payload.py#L97
There is a note that it's inefficient.
We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
| > We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
Doesn't really work, because it works with isinstance checks, which could be a subclass or an abc. I thought about this previously and didn't think of any improvements. Not sure what zope.interface is though, might be worth looking into.
Maybe if we have a property that is a key that's the same for the subclassed object... not sure if that will work. Needs some more thought.
Not sure what you mean, the subclasses are e.g. subclass of bytes or (more realistically) a subclass of io.BufferedIO. They are not our classes, just arbitrary types that might be used to store/stream data.
I need to dig at it a bit more to come up with a solution. This one will take a bit of thought so I opened the issue so I don't forget to come back to it.
Looking at this
We can likely store the payload type() and then do a dict lookup on the type() of the incoming data; if there is no match (because it is a subclass), fall back to the linear search
We don't have a benchmark for simple post requests which makes this a bit more difficult to quantify what the performance drag of the linear search is. We should add a benchmark first | 1,731,230,347,000 | null | Performance Issue | [
"aiohttp/payload.py:PayloadRegistry.__init__",
"aiohttp/payload.py:PayloadRegistry.get",
"aiohttp/payload.py:PayloadRegistry.register",
"aiohttp/payload.py:Payload.__init__",
"aiohttp/payload.py:BytesPayload.__init__"
] | [] |
|
langchain-ai/langgraph | langchain-ai__langgraph-2735 | 083a14c2c5bc90d597dd162219d1006a723abdf0 | diff --git a/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py b/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py
index f8d280b96..ff5a91f5d 100644
--- a/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py
+++ b/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py
@@ -438,28 +438,36 @@ def _msgpack_default(obj: Any) -> Union[str, msgpack.ExtType]:
def _msgpack_ext_hook(code: int, data: bytes) -> Any:
if code == EXT_CONSTRUCTOR_SINGLE_ARG:
try:
- tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook)
+ tup = msgpack.unpackb(
+ data, ext_hook=_msgpack_ext_hook, strict_map_key=False
+ )
# module, name, arg
return getattr(importlib.import_module(tup[0]), tup[1])(tup[2])
except Exception:
return
elif code == EXT_CONSTRUCTOR_POS_ARGS:
try:
- tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook)
+ tup = msgpack.unpackb(
+ data, ext_hook=_msgpack_ext_hook, strict_map_key=False
+ )
# module, name, args
return getattr(importlib.import_module(tup[0]), tup[1])(*tup[2])
except Exception:
return
elif code == EXT_CONSTRUCTOR_KW_ARGS:
try:
- tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook)
+ tup = msgpack.unpackb(
+ data, ext_hook=_msgpack_ext_hook, strict_map_key=False
+ )
# module, name, args
return getattr(importlib.import_module(tup[0]), tup[1])(**tup[2])
except Exception:
return
elif code == EXT_METHOD_SINGLE_ARG:
try:
- tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook)
+ tup = msgpack.unpackb(
+ data, ext_hook=_msgpack_ext_hook, strict_map_key=False
+ )
# module, name, arg, method
return getattr(getattr(importlib.import_module(tup[0]), tup[1]), tup[3])(
tup[2]
@@ -468,7 +476,9 @@ def _msgpack_ext_hook(code: int, data: bytes) -> Any:
return
elif code == EXT_PYDANTIC_V1:
try:
- tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook)
+ tup = msgpack.unpackb(
+ data, ext_hook=_msgpack_ext_hook, strict_map_key=False
+ )
# module, name, kwargs
cls = getattr(importlib.import_module(tup[0]), tup[1])
try:
@@ -479,7 +489,9 @@ def _msgpack_ext_hook(code: int, data: bytes) -> Any:
return
elif code == EXT_PYDANTIC_V2:
try:
- tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook)
+ tup = msgpack.unpackb(
+ data, ext_hook=_msgpack_ext_hook, strict_map_key=False
+ )
# module, name, kwargs, method
cls = getattr(importlib.import_module(tup[0]), tup[1])
try:
msgpack deserialization with strict_map_key=False
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import asyncio
from typing import Annotated, TypedDict
from langgraph.constants import END
from langgraph.graph import StateGraph
from pydantic import BaseModel
from app.config.settings import get_settings
from app.utils.redis_checkpointer import AsyncRedisSaver
class CitationBroker(BaseModel):
map_idx_to_utt: dict[int, int]
class AgentState(TypedDict):
citation_brokers: list[CitationBroker]
def f1(state):
cit_b = CitationBroker(
map_idx_to_utt={
1: 1,
2: 2,
3: 3
})
"""
With
map_idx_to_utt={
'1': 1,
'2': 2,
'3': 3
})
and
class CitationBroker(BaseModel):
map_idx_to_utt: dict[str, int]
works
"""
print(str(cit_b)) # not None
return {
"citation_brokers": state.get('citation_brokers', []) + [cit_b],
}
def ask_human_node(state):
print("get user input")
builder = StateGraph(AgentState)
builder.add_node("node_1", f1)
builder.add_node("ask_human_node", ask_human_node)
builder.set_entry_point("node_1")
builder.add_edge("ask_human_node", "node_1")
builder.add_edge("node_1", "ask_human_node")
settings = get_settings()
async def main():
async with AsyncRedisSaver.from_url(settings.CACHE_REDIS_ENDPOINT) as memory:
graph = builder.compile(checkpointer=memory, interrupt_before=["ask_human_node"])
thread = {
"configurable": {
"thread_id": "1"
}
}
async for event in graph.astream_events({
"citation_brokers": [],
}, config=thread, version="v2"):
pass
snapshot = await graph.aget_state(thread)
print(str(snapshot.values['citation_brokers'][0])) # None
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```shell
File "msgpack\\_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb ValueError: int is not allowed for map key when strict_map_key=True
```
### Description
I upgraded libraries from **langgraph** 0.2.19 to **0.2.58** and **langgraph-checkpoint** from 1.0.9 to **2.0.8**.
I'm using a REDIS checkpointer as detailed in [the official guide](https://langchain-ai.github.io/langgraph/how-tos/persistence_redis/).
I'm serializing a TypedDict which contains Pydantic V2 objects as values (keys are strings). Each of these Pydantic V2 objects contains a simple Python dict() (whose keys are _numeric_).
When I try to deserialize the Pydantic object I get the following error:
> File "msgpack\\_unpacker.pyx", line 194, in msgpack._cmsgpack.unpackb ValueError: int is not allowed for map key when strict_map_key=True
Setting `strict_map_key=False` inside _jsonplus.py_ solves the issue, but this implies cloning _jsonplus.py_ just to set `strict_map_key=False`.
Indeed at line 210 of _jsonplus.py_ I find:
```
elif type_ == "msgpack":
return msgpack.unpackb(
data_, ext_hook=_msgpack_ext_hook, strict_map_key=False
)
```
but at line 482 of _jsonplus.py_:
```
elif code == EXT_PYDANTIC_V2:
try:
tup = msgpack.unpackb(data, ext_hook=_msgpack_ext_hook) # lacks of strict_map_key=False
# module, name, kwargs, method
cls = getattr(importlib.import_module(tup[0]), tup[1])
try:
return cls(**tup[2])
except Exception:
return cls.model_construct(**tup[2])
except Exception:
return
```
Any advice on how I should fix the problem? In the meantime I have reverted to the previous version of the libraries (which solves the issue).
Thanks in advance.
### System Info
Python 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)] on win32
| 1,734,019,110,000 | null | Bug Report | [
"libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py:_msgpack_ext_hook"
] | [] |
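A standalone illustration of the msgpack behaviour behind the error above: since msgpack 1.0, `strict_map_key` defaults to True and rejects integer map keys, which is why the patch passes `strict_map_key=False` to the nested `unpackb` calls as well.

```python
import msgpack

packed = msgpack.packb({1: 1, 2: 2, 3: 3})

try:
    msgpack.unpackb(packed)  # strict_map_key=True by default
except ValueError as exc:
    print(exc)  # int is not allowed for map key when strict_map_key=True

print(msgpack.unpackb(packed, strict_map_key=False))  # {1: 1, 2: 2, 3: 3}
```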
||
langchain-ai/langgraph | langchain-ai__langgraph-2724 | ff3bc2f9821d9dffe5d1a8fcf6eb1758f3715da8 | diff --git a/libs/langgraph/langgraph/prebuilt/tool_node.py b/libs/langgraph/langgraph/prebuilt/tool_node.py
index d3d0751e2..e2ac50b8e 100644
--- a/libs/langgraph/langgraph/prebuilt/tool_node.py
+++ b/libs/langgraph/langgraph/prebuilt/tool_node.py
@@ -297,7 +297,7 @@ def _run_one(
try:
input = {**call, **{"type": "tool_call"}}
- response = self.tools_by_name[call["name"]].invoke(input)
+ response = self.tools_by_name[call["name"]].invoke(input, config)
# GraphInterrupt is a special exception that will always be raised.
# It can be triggered in the following scenarios:
@@ -352,7 +352,7 @@ async def _arun_one(
try:
input = {**call, **{"type": "tool_call"}}
- response = await self.tools_by_name[call["name"]].ainvoke(input)
+ response = await self.tools_by_name[call["name"]].ainvoke(input, config)
# GraphInterrupt is a special exception that will always be raised.
# It can be triggered in the following scenarios:
| Langgraph 0.2.58 resulted in empty config passed to tools in langgraph
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
# Graph setup
async with get_checkpoints_pool().connection() as conn:
checkpointer = AsyncPostgresSaver(conn=conn) # type: ignore
graph.checkpointer = checkpointer
runnable_config = RunnableConfig(
configurable={
"thread_id": str(conversation_id),
... # other fields
},
recursion_limit=80,
)
try:
events = graph.astream(
{"messages": ("user", message_content)},
runnable_config,
stream_mode="updates",
)
...
# typical tool
@tool
async def some_tool(input: str, config: RunnableConfig):
thread_id = config.get("configurable", {}).get("thread_id") # <--- this returns nothing
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Today, a change to our poetry lock file resulted in our CI installing the latest minor version of langgraph, going from 0.2.39 to 0.2.58.
Our application passes the RunnableConfig into each tool, and we use it to get information that is critical for each tool. This worked fine before the upgrade, but after upgrading to 0.2.58 the config passed to our tools was suddenly blank. No configurable, no metadata: `{'tags': [], 'metadata': {}, 'callbacks': None, 'recursion_limit': 25, 'configurable': {}}`
When going back to 0.2.39 the issue is not there, and the config contains all data we pass in when initializing the graph.
Isolating the change to installing either 0.2.39 or 0.2.58 toggles the error, and results in the following dependencies also changing (we suspect it might be related to the checkpointer):
```
poetry add langgraph@0.2.58
Updating dependencies
Resolving dependencies... (2.0s)
Package operations: 0 installs, 4 updates, 1 removal
- Removing httpx-sse (0.4.0)
- Updating langchain-core (0.3.13 -> 0.3.24)
- Updating langgraph-checkpoint (2.0.2 -> 2.0.8)
- Updating langgraph-sdk (0.1.35 -> 0.1.43)
- Updating langgraph (0.2.39 -> 0.2.58)
Writing lock file
```
I know the example is not reproducible, but it was too much work to actually set up an example given db dependencies etc, so I hope the provided example illustrates well enough what the issue is. I still wanted to report this however, as it was a huge breaking change for us, which definitely should not be the case with a minor upgrade.
### System Info
System Information (0.2.58)
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:35:29 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Sep 11 2023, 15:00:52) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.3.24
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_anthropic: 0.1.20
> langchain_openai: 0.2.4
> langchain_text_splitters: 0.3.0
> langchainhub: 0.1.21
> langgraph_sdk: 0.1.43
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> anthropic: 0.28.1
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.52.2
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
----------------------
System Information (0.2.39)
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:35:29 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Sep 11 2023, 15:00:52) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.3.13
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_anthropic: 0.1.20
> langchain_openai: 0.2.4
> langchain_text_splitters: 0.3.0
> langchainhub: 0.1.21
> langgraph: 0.2.39
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> anthropic: 0.28.1
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.2
> langgraph-sdk: 0.1.35
> numpy: 1.26.4
> openai: 1.52.2
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
| How are you invoking the tools?
We bind the tools to a gpt-4-o agent
```python
llm = AzureChatOpenAI(
azure_deployment="gpt4o",
api_version="2024-06-01",
temperature=0,
timeout=120,
)
self.runnable = prompt | llm.bind_tools(tools)
runnable_input = {
**state,
"company": company,
}
result: AIMessage = await self.runnable.ainvoke(runnable_input) # type: ignore
```
We're experiencing the same problem.
Actually, the breaking change is in `0.2.57` (also released yesterday), `0.2.56` works as expected.
Ah, it's a `langchain-core` issue, cc @baskaryan. We will investigate!
"libs/langgraph/langgraph/prebuilt/tool_node.py:ToolNode._run_one",
"libs/langgraph/langgraph/prebuilt/tool_node.py:ToolNode._arun_one"
] | [] |
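A small sketch of the behaviour the one-line fix above restores: when the config is forwarded to `invoke`, a tool that declares a `RunnableConfig` parameter can read `configurable` values again. The tool below is a made-up example for illustration, not code from the report.

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool


@tool
def read_thread_id(question: str, config: RunnableConfig) -> str:
    """Return the thread_id stored in the runnable config."""
    return config.get("configurable", {}).get("thread_id", "<missing>")


# Forwarding the config explicitly is what ToolNode does after the fix:
print(read_thread_id.invoke(
    {"question": "hi"},
    config={"configurable": {"thread_id": "42"}},
))  # -> "42"
```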
|
langchain-ai/langgraph | langchain-ai__langgraph-2571 | c6fe26510e814e1cf165bc957b42bf4d5adf789b | diff --git a/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py b/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py
index 440cb452e..4c0f5295c 100644
--- a/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py
+++ b/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py
@@ -5,7 +5,6 @@
from langchain_core.runnables import RunnableConfig
from psycopg import AsyncConnection, AsyncCursor, AsyncPipeline, Capabilities
-from psycopg.errors import UndefinedTable
from psycopg.rows import DictRow, dict_row
from psycopg.types.json import Jsonb
from psycopg_pool import AsyncConnectionPool
@@ -81,17 +80,15 @@ async def setup(self) -> None:
the first time checkpointer is used.
"""
async with self._cursor() as cur:
- try:
- results = await cur.execute(
- "SELECT v FROM checkpoint_migrations ORDER BY v DESC LIMIT 1"
- )
- row = await results.fetchone()
- if row is None:
- version = -1
- else:
- version = row["v"]
- except UndefinedTable:
+ await cur.execute(self.MIGRATIONS[0])
+ results = await cur.execute(
+ "SELECT v FROM checkpoint_migrations ORDER BY v DESC LIMIT 1"
+ )
+ row = await results.fetchone()
+ if row is None:
version = -1
+ else:
+ version = row["v"]
for v, migration in zip(
range(version + 1, len(self.MIGRATIONS)),
self.MIGRATIONS[version + 1 :],
| langgraph-checkpoint-postgres: Calls to postgres async checkpointer setup() fail on new postgres db
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
# set POSTGRES_URI=...
class CheckpointerManager:
"""
A manager class to handle checkpointer initialization and lifecycle.
"""
def __init__(self, conninfo: str, max_pool_size: int = 20):
self.conninfo = conninfo
self.max_pool_size = max_pool_size
self.pool = None
self.checkpointer = None
async def setup(self):
"""
Initialize the connection pool and checkpointer.
"""
self.pool = AsyncConnectionPool(conninfo=self.conninfo,
max_size=self.max_pool_size,
open=False,timeout=5)
await self.pool.open(wait=True, timeout=5)
self.checkpointer = AsyncPostgresSaver(conn=self.pool)
try:
await self.checkpointer.setup()
except Exception as e:
print(f"Error setting up checkpointer: {e}")
await self.close()
raise e
return self
async def close(self):
"""
Close the connection pool and cleanup resources.
"""
if self.pool:
await self.pool.close()
def get_checkpointer(self):
"""
Get the initialized checkpointer.
"""
if not self.checkpointer:
raise RuntimeError("Checkpointer has not been initialized. Call setup() first.")
return self.checkpointer
checkpointer_manager = CheckpointerManager(os.getenv(POSTGRES_URI))
checkpointer_manager = asyncio.run(checkpointer_manager.setup())
```
```
### Error Message and Stack Trace (if applicable)
```shell
setting up checkpointer: current transaction is aborted, commands ignored until end of transaction block
Traceback (most recent call last):
File "/home/tai/project/main.py", line 91, in <module>
main()
File "/home/tai/project/main.py", line 49, in main
checkpointer_manager = asyncio.run(checkpointer_manager.setup())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/tai/project/checkpointer/__init__.py", line 31, in setup
raise e
File "/home/tai/project/checkpointer/__init__.py", line 27, in setup
await self.checkpointer.setup()
File "/home/tai/project/.venv/lib/python3.12/site-packages/langgraph/checkpoint/postgres/aio.py", line 98, in setup
await cur.execute(migration)
File "/home/tai/project/.venv/lib/python3.12/site-packages/psycopg/cursor_async.py", line 97, in execute
raise ex.with_traceback(None)
psycopg.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block
```
### Description
When running the async postgres setup() function, the first SELECT execution fails, but the cursor is reused in the except block, so postgres ignores it. The CREATE TABLE IF NOT EXISTS statement should be hoisted up and out of the main block so that it runs in a separate transaction context, or at least before the SELECT statement runs. For now, I've got it fixed by running this manually:
```
class CheckpointerManager:
"""
A manager class to handle checkpointer initialization and lifecycle.
"""
initialization = """CREATE TABLE IF NOT EXISTS checkpoint_migrations (
v INTEGER PRIMARY KEY
);"""
def __init__(self, conninfo: str, max_pool_size: int = 20):
self.conninfo = conninfo
self.max_pool_size = max_pool_size
self.pool = None
self.checkpointer = None
async def setup(self):
"""
Initialize the connection pool and checkpointer.
"""
self.pool = AsyncConnectionPool(conninfo=self.conninfo,
max_size=self.max_pool_size,
open=False,timeout=5)
await self.pool.open(wait=True, timeout=5)
self.checkpointer = AsyncPostgresSaver(conn=self.pool)
async with self.pool.connection() as conn:
await conn.execute(self.initialization)
try:
await self.checkpointer.setup()
except Exception as e:
print(f"Error setting up checkpointer: {e}")
await self.close()
raise e
return self
async def close(self):
"""
Close the connection pool and cleanup resources.
"""
if self.pool:
await self.pool.close()
def get_checkpointer(self):
"""
Get the initialized checkpointer.
"""
if not self.checkpointer:
raise RuntimeError("Checkpointer has not been initialized. Call setup() first.")
return self.checkpointer
checkpointer_manager = CheckpointerManager(os.getenv(POSTGRES_URI))
checkpointer_manager = asyncio.run(checkpointer_manager.setup())
```
(note the `async with self.pool.connection() as conn: ; await conn.execute(self.initialization)` before the `.setup()` call; this fixes it.)
### System Info
pip freeze | grep langgraph
langgraph==0.2.53
langgraph-checkpoint==2.0.6
langgraph-checkpoint-postgres==2.0.4
langgraph-checkpoint-sqlite==2.0.1
langgraph-sdk==0.1.36
| 1,732,788,364,000 | null | Bug Report | [
"libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py:AsyncPostgresSaver.setup"
] | [] |
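A standalone sketch of the PostgreSQL behaviour described above (the DSN and table name are made up): once a statement fails inside an open transaction, every later statement on that connection raises `InFailedSqlTransaction` until a rollback, which is why the patch now runs the table-creating migration before the first SELECT.

```python
import psycopg

with psycopg.connect("dbname=example") as conn:  # hypothetical DSN
    with conn.cursor() as cur:
        try:
            cur.execute("SELECT v FROM not_yet_created")
        except psycopg.errors.UndefinedTable:
            pass  # the surrounding transaction is now aborted
        # Without an explicit conn.rollback() here, the next statement fails
        # with psycopg.errors.InFailedSqlTransaction, mirroring the traceback:
        cur.execute("CREATE TABLE IF NOT EXISTS not_yet_created (v INTEGER)")
```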
||
sktime/sktime | sktime__sktime-7417 | d7f582335197b9c1382d33e40c4dbe1dbae14137 | diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py
index b7efd883810..675d45bb05a 100644
--- a/sktime/forecasting/base/adapters/_statsmodels.py
+++ b/sktime/forecasting/base/adapters/_statsmodels.py
@@ -53,6 +53,11 @@ def _fit(self, y, X, fh):
-------
self : returns an instance of self.
"""
+ # save info needed for _predict: should these be saved to self._y_metdata?
+ self._y_len = len(y)
+ self._y_first_index = y.index[0]
+ self._set_cutoff_from_y(y)
+
# statsmodels does not support the pd.Int64Index as required,
# so we coerce them here to pd.RangeIndex
if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):
@@ -104,8 +109,8 @@ def _predict(self, fh, X):
"""
# statsmodels requires zero-based indexing starting at the
# beginning of the training series when passing integers
- start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
- fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
+ start, end = fh.to_absolute_int(self._y_first_index, self.cutoff)[[0, -1]]
+ fh_int = fh.to_absolute_int(self._y_first_index, self.cutoff) - self._y_len
# bug fix for evaluate function as test_plus_train indices are passed
# statsmodels exog must contain test indices only.
@@ -130,7 +135,7 @@ def _predict(self, fh, X):
y_pred = y_pred.iloc[fh_int]
# ensure that name is not added nor removed
# otherwise this may upset conversion to pd.DataFrame
- y_pred.name = self._y.name
+ y_pred.name = self._get_varnames()[0]
return y_pred
@staticmethod
@@ -195,8 +200,8 @@ def _predict_interval(self, fh, X, coverage):
if not implements_interval_adapter and implements_quantiles:
return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)
- start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
- fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
+ start, end = fh.to_absolute_int(self._y_first_index, self.cutoff)[[0, -1]]
+ fh_int = fh.to_absolute_int(self._y_first_index, self.cutoff) - self._y_len
# if fh > 1 steps ahead of cutoff
fh_int = fh_int - fh_int[0]
| [BUG] Unable to use _StatsModelsAdapter.predict if config remember_data=False
**Describe the bug**
If the config `remember_data=False` is set then `_StatsModelsAdapter._predict` will throw an error when trying to use `_y` and `_X`.
**To Reproduce**
```python
import numpy as np
from sktime.forecasting.sarimax import SARIMAX
forecaster = SARIMAX()
forecaster.set_config(remember_data=False)
y = np.ones(10)
forecaster.fit(y)
forecaster.predict(fh=11)
```
**Expected behavior**
`_StatsModelsAdapter` forecasters should be able to `predict` with config `remember_data=False`.
Related to #6914
| 1,732,046,055,000 | null | Bug Report | [
"sktime/forecasting/base/adapters/_statsmodels.py:_StatsModelsAdapter._fit",
"sktime/forecasting/base/adapters/_statsmodels.py:_StatsModelsAdapter._predict",
"sktime/forecasting/base/adapters/_statsmodels.py:_StatsModelsAdapter._predict_interval"
] | [] |
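A toy sketch (made-up class, not sktime code) of the pattern the patch above applies: cache the few pieces of fit-time metadata that prediction actually needs, so `predict` no longer depends on the training series being remembered.

```python
import pandas as pd


class TinyForecaster:
    """predict() relies only on metadata cached during fit(), never on self._y."""

    def fit(self, y: pd.Series) -> "TinyForecaster":
        self._y_len = len(y)
        self._y_first_index = y.index[0]
        self._last_value = y.iloc[-1]
        return self

    def predict(self, steps: int) -> pd.Series:
        start = self._y_first_index + self._y_len
        index = pd.RangeIndex(start, start + steps)
        return pd.Series([self._last_value] * steps, index=index)  # naive forecast


y = pd.Series([1.0, 2.0, 3.0], index=pd.RangeIndex(0, 3))
print(TinyForecaster().fit(y).predict(2))
```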
||
sktime/sktime | sktime__sktime-7404 | 193ec0118dc0d062b2eac81511bcb13f6dc08a67 | diff --git a/sktime/forecasting/croston.py b/sktime/forecasting/croston.py
index 4aee85f8c86..fe7956dc428 100644
--- a/sktime/forecasting/croston.py
+++ b/sktime/forecasting/croston.py
@@ -77,6 +77,7 @@ class Croston(BaseForecaster):
# estimator type
# --------------
"requires-fh-in-fit": False, # is forecasting horizon already required in fit?
+ "ignores-exogeneous-X": True,
}
def __init__(self, smoothing=0.1):
| [BUG] Croston cannot handle exogenous variables.
**Describe the bug**
Croston cannot handle exogenous variables, but is marked as if it can.
**To Reproduce**
Example returns the tag "ignores-exogeneous-X".
```python
from sktime.forecasting.croston import Croston
Croston.get_class_tag("ignores-exogeneous-X")
```
**Expected behavior**
The returned tag "ignores-exogeneous-X" should be True and not False.
**Versions**
0.30.1
| Agreed - very easy fix, would you (or someone else) like to fix it with a PR?
@fkiraly
I would like to work on this issue
Kindly assign this one to me..
Thanks
@kdekker-private
The issue lies since the parameter `ignores-exogeneous-X` is by default set to **True**
Yes, so you need to override the default in the concrete class, by adding it to the dict there. | 1,731,945,409,000 | null | Bug Report | [
"sktime/forecasting/croston.py:Croston"
] | [] |
|
numpy/numpy | numpy__numpy-27598 | a905925ef40a7551d16d78d81c7e6d08b59559e4 | diff --git a/numpy/ctypeslib.py b/numpy/ctypeslib.py
index 370cdf224cdc..d11b9dcb43d3 100644
--- a/numpy/ctypeslib.py
+++ b/numpy/ctypeslib.py
@@ -527,6 +527,26 @@ def as_array(obj, shape=None):
The shape parameter must be given if converting from a ctypes POINTER.
The shape parameter is ignored if converting from a ctypes array
+
+ Examples
+ --------
+ Converting a ctypes integer array:
+
+ >>> import ctypes
+ >>> ctypes_array = (ctypes.c_int * 5)(0, 1, 2, 3, 4)
+ >>> np_array = np.ctypeslib.as_array(ctypes_array)
+ >>> np_array
+ array([0, 1, 2, 3, 4], dtype=int32)
+
+ Converting a ctypes POINTER:
+
+ >>> import ctypes
+ >>> buffer = (ctypes.c_int * 5)(0, 1, 2, 3, 4)
+ >>> pointer = ctypes.cast(buffer, ctypes.POINTER(ctypes.c_int))
+ >>> np_array = np.ctypeslib.as_array(pointer, (5,))
+ >>> np_array
+ array([0, 1, 2, 3, 4], dtype=int32)
+
"""
if isinstance(obj, ctypes._Pointer):
# convert pointers to an array of the desired shape
@@ -541,8 +561,31 @@ def as_array(obj, shape=None):
def as_ctypes(obj):
- """Create and return a ctypes object from a numpy array. Actually
- anything that exposes the __array_interface__ is accepted."""
+ """
+ Create and return a ctypes object from a numpy array. Actually
+ anything that exposes the __array_interface__ is accepted.
+
+ Examples
+ --------
+ Create ctypes object from inferred int ``np.array``:
+
+ >>> inferred_int_array = np.array([1, 2, 3])
+ >>> c_int_array = np.ctypeslib.as_ctypes(inferred_int_array)
+ >>> type(c_int_array)
+ <class 'c_long_Array_3'>
+ >>> c_int_array[:]
+ [1, 2, 3]
+
+ Create ctypes object from explicit 8 bit unsigned int ``np.array`` :
+
+ >>> exp_int_array = np.array([1, 2, 3], dtype=np.uint8)
+ >>> c_int_array = np.ctypeslib.as_ctypes(exp_int_array)
+ >>> type(c_int_array)
+ <class 'c_ubyte_Array_3'>
+ >>> c_int_array[:]
+ [1, 2, 3]
+
+ """
ai = obj.__array_interface__
if ai["strides"]:
raise TypeError("strided arrays not supported")
| DOC: Examples in docstrings – tracking issue
"Examples, more examples, and more detailed examples" is the recurrent theme in the feedback about the NumPy documentation we received via the 2020 and 2021 NumPy user surveys.
If you come across a docstring where a function is missing an example or more/better examples are needed, please add a comment below.
**Re: the “sprintable” label** This could be a good activity for a sprint with new contributors as most of them are NumPy users. However, one doesn’t have to be a sprint participant to contribute to this initiative.
How can I help with this issue?
Take a look at the functions you use from NumPy, and see if the docstrings have examples similar to your non-trivial use cases. If not, comment below asking if it seems an example would help newer users figure out how to replicate what you do.
This could be a good activity for a sprint with new contributors as most of them are NumPy users. I’ll add the “sprint” label to the issue. However, one doesn’t have to be a sprint participant to contribute to this initiative.
Hi everyone!
One thing that could also be tracked is whether the existing examples in docstrings are following the standards listed in the documentation. I've found an example in `fromnumeric.py` (lines 171-189) that I'm not entirely sure is in accordance with the documentation.
I'm willing to help on that.
Hi, just wanted to confirm whether the general docs (examples for various functions) are in the scope of this issue, or whether we are currently tracking only the docstrings.
I do see a lot of additions that can be done to the `numpy.random.generated` page
[Link under discussion](https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.integers.html)
I was talking to @InessaPawson about that. I do believe that this issue could, in fact, be split into several different ones. Even for the docstrings, this could be split into tracking down the ones that are there, the ones that are not, the ones that follow the standards, et cetera.
We can use some examples in polynomial.py. What do you think about the following two (one for polymulx, the other for polyval2d):
```python
import numpy as np
from numpy.polynomial import polynomial as P
c=(-2, 0, 1)
P.polymulx(c)
#array([-0., -2., 0., 1.])
from numpy.polynomial.polynomial import polyval2d
a=polyval2d(2, -3j, [(-1j,1.5,2), (1,0.2j,0)]) #This is -1jx⁰y⁰ + 1.5x⁰y¹ + 2x⁰y² + 1x¹y⁰ + 0.2jx¹y¹ + 0x¹y², evaluated at x = 2 and y=-3j
#(-14.799999999999997-5.5j)
```
> Just wanted to confirm. If the general DOCS (examples for various functions) are in the scope of this Issue or we are currently tracking only the DOC Strings.
@bhavukkalra This issue is only for examples in docstrings. Please file a separate issue for examples in other parts of the NumPy documentation.
@dbrs01 this looks good - Would you like to submit a PR?
Thank you Melissa. I appreciate your feedback. I will do it (later) and will continue including more examples. I have not been able to attend meetings recently because I started a new job this past Monday and am in the process of doing paperwork, installing my machine, going to the office (no telework) and trying to get the most from my coworker from whom I inherit tasks and whose last day of work is on May 30. I hope to be back soon. See you!
I can help with the PRs if you feel this is something that could help. Feel free to post the examples here and I can submit the PRs.
Some time ago I wrote a script to find SciPy functions whose docstrings are missing the "Examples" section; there is a copy of that script in https://github.com/scipy/scipy/issues/7168. (*Edit*: the script for SciPy, [`find_functions_missing_examples.py`](https://github.com/WarrenWeckesser/analyze-scipy-code/blob/main/find_functions_missing_examples.py) is now maintained in my [`analyze-scipy-code`](https://github.com/WarrenWeckesser/analyze-scipy-code) repository on github.)
I just created a gist with a variation of that script that has been modified for NumPy: https://gist.github.com/WarrenWeckesser/c33d0236279cc5d73843f0497e14ed0e. (There is a lot that could be done with such a script; hack away on your own copy if you feel inspired.)
Here's the output when I run that script with the development version (main branch) of NumPy. Note that some of these functions have very short docstrings that say something like "See $SOMEOTHERFUNC for details." This is probably intentional, and these are not good candidates for new contributors or for use in a sprint. Also, it looks like some of these might be "namespace pollution": names from the main numpy namespace that also ended up in a submodule. When in doubt, ask the NumPy devs to confirm that a function should get an "Examples" section before you spend time working on it.
```
NumPy version 1.24.0.dev0+427.gf9bed20bf
np (22)
add_docstring
add_newdoc
add_newdoc_ufunc
alltrue
copyto
cumproduct
deprecate_with_doc
diag_indices_from
fastCopyAndTranspose
get_array_wrap
get_include
get_printoptions
getbufsize
msort
product
recfromcsv
recfromtxt
round_
setbufsize
sometrue
tril_indices_from
triu_indices_from
np.char (40)
add
array
asarray
center
encode
equal
expandtabs
find
greater
greater_equal
index
isalnum
isalpha
isdecimal
isdigit
islower
isnumeric
isspace
istitle
isupper
join
less
less_equal
ljust
mod
multiply
not_equal
partition
replace
rfind
rindex
rjust
rpartition
rsplit
split
splitlines
startswith
str_len
translate
zfill
np.ctypeslib (4)
as_array
as_ctypes
as_ctypes_type
load_library
np.ma (26)
compress_cols
compress_nd
compress_rows
compressed
convolve
correlate
cov
diag
ediff1d
in1d
isin
left_shift
max
min
power
put
putmask
reshape
right_shift
round
round_
setxor1d
sort
take
union1d
unique
np.random.Generator (2)
beta
exponential
Found 94 functions
```
FWIW this check is also now built-in to the numpydoc docstring validator, which can be run during a doc build. For example, when building the docs locally you can add `numpydoc_validation_checks = {"EX01"}` to `conf.py` to automatically report this info during the sphinx-build process.
@WarrenWeckesser Thank you so much for sharing this information! I’ll file a separate issue for each function after consulting with our core developers.
We should rerun the script
The old script was missing some functions, which I think is because of the issues related to _ArrayFunctionDispatcher from last year. For example https://github.com/numpy/numpy/issues/23032 and https://github.com/numpy/numpy/issues/23307.
I added the following lines to the script. EDIT: Found better way, using `inspect.isroutine`.
```
import inspect
```
Then I replaced `funcs` with
```
funcs = [item for item in objects
if inspect.isroutine(item[1])]
```
Here is the new list:
```
np (12)
amax
amin
around
getbufsize
matrix_transpose
setbufsize
show_config
show_runtime
unique_all
unique_counts
unique_inverse
unique_values
np.char (7)
array
isalpha
isspace
mod
rfind
splitlines
startswith
np.ctypeslib (4)
as_array
as_ctypes
as_ctypes_type
load_library
np.lib (2)
add_docstring
add_newdoc
np.linalg (10)
cross
diagonal
matmul
matrix_norm
matrix_transpose
outer
svdvals
trace
vecdot
vector_norm
np.ma (6)
convolve
correlate
left_shift
put
reshape
take
np.rec (1)
find_duplicate
Found 42 functions
```
Prior to the change, there were only 24. I noticed the script was missing `svdvals`, which is why I started working on it. My hope is to have the POSSEE team fill in all of these (that need it) by next week.
~~Unfortunately, the type `<class 'numpy._ArrayFunctionDispatcher'>` also captures a few classes that don't need examples. I spent a couple hours reading about the issues last year and changes made related to `_ArrayFunctionDispatcher`. Maybe someone has an idea of a more elegant fix to the script than what I provided above.~~ Found a better way.
I tried adding `numpydoc_validation_checks = {"EX01"}` to `conf.py`, but was unable to see where/how this generated the needed report. I'd love advice on that.
> I tried adding numpydoc_validation_checks = {"EX01"} to conf.py, but was unable to see where/how this generated the needed report. I'd love advice on that.
When you add `numpydoc_validation_checks = {"EX01"}` to conf.py the validation warnings appear in the sphinx build log. To capture the build log in a file, you can do something like:
`make html 2>&1 | tee buildlog.txt`
If you want to filter the log so that only the warnings associated with the validation checks are stored, try:
`make html 2>&1 | tee >(grep -B1 -A1 "EX01" >> buildlog.txt )`
I'll share the output as a file [buildlog.log](https://github.com/numpy/numpy/files/15408328/buildlog.log), rather than copy/paste the output here. There are 1808 items.
I modified the script above to use `inspect.isroutine`. The results are slightly different than capturing `numpy._ArrayFunctionDispatcher` and then using `isinstance`. [Here is the updated script](https://github.com/bmwoodruff/numpy-ideas/blob/main/find_numpy_functions_missing_examples.ipynb), along with the output in a Jupyter notebook.
While working on this, I noticed that there were several functions on this list that had docstrings that were not published on the web version of the docs. [This script](https://github.com/bmwoodruff/numpy-ideas/blob/main/locate-missing-docs.ipynb) uses the `object.inv` file from a doc build and compares each routine against that list. I'll be including a PR in a bit with the missing `ma` links (they don't show on the attached link, because I added them). There are 5 functions from the new `strings` module that need to be added as well. I can tackle that unless someone else is already working on it (I know `strings` is new in 2.0).
Created [gist of the script](https://gist.github.com/luxedo/c7ad85b8848136671d126cd7baa07990) to run directly into a python shell.
# Missing Docstrings
### np (12)
- [x] amax - Alias of max. Are examples needed?
- [x] amin - Alias of min
- [x] around - Alias of round
- [x] getbufsize
- [x] matrix_transpose
- [x] setbufsize
- [ ] show_config
- [ ] show_runtime
- [x] unique_all
- [x] unique_counts
- [x] unique_inverse
- [x] unique_values
### np.char (7)
- [x] array
- [ ] isalpha
- [ ] isspace
- [ ] mod
- [ ] rfind
- [ ] splitlines
- [ ] startswith
### np.ctypeslib (4)
- [ ] as_array
- [ ] as_ctypes
- [ ] as_ctypes_type
- [ ] load_library
### np.lib (2)
- [ ] add_docstring
- [ ] add_newdoc
### np.linalg (10)
- [x] cross
- [x] diagonal
- [x] matmul
- [x] matrix_norm
- [x] matrix_transpose
- [x] outer
- [x] svdvals
- [x] trace
- [x] vecdot
- [x] vector_norm
### np.ma (6)
- [ ] convolve
- [ ] correlate
- [ ] left_shift
- [ ] put
- [ ] reshape
- [x] take
### np.rec (1)
- [ ] find_duplicate
I sent too many PRs already. I'll wait for review to add more.
@luxedo If you put `[skip actions][skip azp][skip cirrus]` in your [commit message](https://numpy.org/devdocs/dev/development_workflow.html#writing-the-commit-message), then proper GitHub actions will be skipped and your tests should pass.
Thank you! My bad.
I should not use `[skip circle]` right? The doctests run there?
Correct. They have to be in the commit message as well. Putting them in the comments of the PR is not sufficient (learned that myself a few weeks ago). Any time you add more changes in another commit, you have to add these tags to the commit message to make sure the correct CI jobs are run.
Also please try to combine small example additions together into one PR
I did like to add examples for the numpy.char.array, currently working on it.
Adding example code snippets to np.char.isspace function, currently in the process of adding it
adding example code snippets to np.ma.convolve function currently in the process of adding it
| 1,729,353,573,000 | null | Feature Request | [
"numpy/ctypeslib.py:as_array",
"numpy/ctypeslib.py:as_ctypes"
] | [] |
|
numpy/numpy | numpy__numpy-27595 | a905925ef40a7551d16d78d81c7e6d08b59559e4 | diff --git a/numpy/lib/_function_base_impl.py b/numpy/lib/_function_base_impl.py
index 477c6a4f39a8..7a2c69bad0e6 100644
--- a/numpy/lib/_function_base_impl.py
+++ b/numpy/lib/_function_base_impl.py
@@ -5198,7 +5198,7 @@ def delete(arr, obj, axis=None):
----------
arr : array_like
Input array.
- obj : slice, int or array of ints
+ obj : slice, int, array-like of ints or bools
Indicate indices of sub-arrays to remove along the specified axis.
.. versionchanged:: 1.19.0
| DOC: types for numpy.delete's obj argument don't cover all possibilities
### Issue with current documentation:
The docstring for `numpy.delete` specifies the type of the `obj` parameter as:
> **obj : _slice, int or array of ints_**
It seems it can also be anything that can be cast as an array of ints, including e.g. an iterable like `range(0,4,2)` or a list of ints like `[0,2,3]`. I confirmed this by looking in the source and I think this line is where anything that isn't a slice, int or bool is cast as a `numpy.array`:
https://github.com/numpy/numpy/blob/83bf22579b6eaa8113f6cff67b10fd9524965f23/numpy/lib/_function_base_impl.py#L5317
This type specification also doesn't mention that `obj` can be an array of booleans.
### Idea or request for content:
I'd open a PR but am not sure of the best wording. I'm looking for something like
> **obj : _slice, int, array of bools or anything that can be cast as an array of ints_**
but that seems a bit verbose. Perhaps there's some NumPy jargon that's more succinct?
| 1,729,307,703,000 | null | Feature Request | [
"numpy/lib/_function_base_impl.py:delete"
] | [] |
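A short demonstration of the accepted `obj` values discussed in the issue above:

```python
import numpy as np

a = np.arange(5)
print(np.delete(a, 2))                # int
print(np.delete(a, slice(0, 4, 2)))   # slice
print(np.delete(a, [0, 2, 3]))        # array-like of ints
print(np.delete(a, range(0, 4, 2)))   # anything castable to an int array
print(np.delete(a, np.array([True, False, True, False, False])))  # boolean mask
```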
||
vllm-project/vllm | vllm-project__vllm-11275 | 60508ffda91c22e4cde3b18f149d222211db8886 | diff --git a/vllm/executor/ray_gpu_executor.py b/vllm/executor/ray_gpu_executor.py
index 4bf5cbbd18ffe..e2c549cbd5331 100644
--- a/vllm/executor/ray_gpu_executor.py
+++ b/vllm/executor/ray_gpu_executor.py
@@ -123,6 +123,7 @@ def _init_workers_ray(self, placement_group: "PlacementGroup",
# Create the workers.
driver_ip = get_ip()
+ workers = []
for bundle_id, bundle in enumerate(placement_group.bundle_specs):
if not bundle.get("GPU", 0):
continue
@@ -138,20 +139,30 @@ def _init_workers_ray(self, placement_group: "PlacementGroup",
scheduling_strategy=scheduling_strategy,
**ray_remote_kwargs,
)(RayWorkerWrapper).remote(vllm_config=self.vllm_config)
+ workers.append(worker)
- if self.use_ray_spmd_worker:
- self.workers.append(worker)
- else:
- worker_ip = ray.get(worker.get_node_ip.remote())
- if worker_ip == driver_ip and self.driver_dummy_worker is None:
+ worker_ip_refs = [
+ worker.get_node_ip.remote() # type: ignore[attr-defined]
+ for worker in workers
+ ]
+ worker_ips = ray.get(worker_ip_refs)
+
+ if not self.use_ray_spmd_worker:
+ for i in range(len(workers)):
+ worker = workers[i]
+ worker_ip = worker_ips[i]
+ if self.driver_dummy_worker is None and worker_ip == driver_ip:
# If the worker is on the same node as the driver, we use it
# as the resource holder for the driver process.
self.driver_dummy_worker = worker
self.driver_worker = RayWorkerWrapper(
vllm_config=self.vllm_config)
- else:
- # Else, added to the list of workers.
- self.workers.append(worker)
+ workers.pop(i)
+ worker_ips.pop(i)
+ self.workers = workers
+ break
+ else:
+ self.workers = workers
logger.debug("workers: %s", self.workers)
logger.debug("driver_dummy_worker: %s", self.driver_dummy_worker)
@@ -161,14 +172,12 @@ def _init_workers_ray(self, placement_group: "PlacementGroup",
"adjusting the Ray placement group or running the driver on a "
"GPU node.")
- worker_ips = [
- ray.get(worker.get_node_ip.remote()) # type: ignore[attr-defined]
- for worker in self.workers
- ]
ip_counts: Dict[str, int] = {}
for ip in worker_ips:
ip_counts[ip] = ip_counts.get(ip, 0) + 1
+ worker_to_ip = dict(zip(self.workers, worker_ips))
+
def sort_by_driver_then_worker_ip(worker):
"""
Sort the workers based on 3 properties:
@@ -179,7 +188,7 @@ def sort_by_driver_then_worker_ip(worker):
3. Finally, if the work is on a node with smaller IP address, it
should be placed first.
"""
- ip = ray.get(worker.get_node_ip.remote())
+ ip = worker_to_ip[worker]
return (ip != driver_ip, ip_counts[ip], ip)
# After sorting, the workers on the same node will be
| [Bug]: LLM initialization time increases significantly with larger tensor parallel size and Ray
### Your current environment
vllm 0.5.2
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.31
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.134-008.7.kangaroo.al8.x86_64-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20Z
GPU 1: NVIDIA L20Z
GPU 2: NVIDIA L20Z
GPU 3: NVIDIA L20Z
GPU 4: NVIDIA L20Z
GPU 5: NVIDIA L20Z
GPU 6: NVIDIA L20Z
GPU 7: NVIDIA L20Z
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 100
On-line CPU(s) list: 0-99
Thread(s) per core: 1
Core(s) per socket: 100
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Processor
Stepping: 8
CPU MHz: 2000.000
BogoMIPS: 4000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.7 MiB
L1i cache: 3.1 MiB
L2 cache: 200 MiB
L3 cache: 105 MiB
NUMA node0 CPU(s): 0-99
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd avx512vbmi umip pku waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.2
[pip3] onnx==1.13.1
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.3.1
[pip3] torch-tensorrt==1.4.0.dev0
[pip3] torchaudio==2.3.1
[pip3] torchtext==0.13.0a0+fae8e8c
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] Could not collect
```
</details>
### Model Input Dumps
Just testing the vLLM init time.
### 🐛 Describe the bug
### Issue Description
We observed significant and unexpected increases in VLLM initialization time when scaling tensor parallelism (TP), especially with Ray enabled.
### Observed Behavior
- TP=1: ~7 seconds initialization time
- TP=4: ~14 seconds initialization time
- TP=4 with Ray: ~24 seconds initialization time
### Expected Behavior
Initialization time should remain relatively constant or have minimal increase when scaling tensor parallelism and use ray.
### Environment
- VLLM version: 0.5.2
- Model: Qwen2-7B
- GPU: NVIDIA L20Z
- Number of GPUs: 8
### Additional Context
The initialization time increase appears disproportionate to the tensor parallel size, suggesting a potential bottleneck in the initialization process, particularly when Ray is involved.
### Reproducible Steps
1. Run VLLM with TP=1
2. Run VLLM with TP=4
3. Run VLLM with TP=4 and Ray enabled
4. Compare initialization times
### vllm start time
```python
def run_vllm(
requests: List[Tuple[str, int, int]],
model: str,
tokenizer: str,
quantization: Optional[str],
tensor_parallel_size: int,
seed: int,
n: int,
use_beam_search: bool,
trust_remote_code: bool,
dtype: str,
max_model_len: Optional[int],
enforce_eager: bool,
kv_cache_dtype: str,
quantization_param_path: Optional[str],
device: str,
enable_prefix_caching: bool,
enable_chunked_prefill: bool,
max_num_batched_tokens: int,
distributed_executor_backend: Optional[str],
gpu_memory_utilization: float = 0.9,
num_scheduler_steps: int = 1,
use_v2_block_manager: bool = False,
download_dir: Optional[str] = None,
load_format: str = EngineArgs.load_format,
disable_async_output_proc: bool = False,
) -> float:
    # Import the required libraries
from vllm import LLM, SamplingParams
print(f"Start initializing LLM at {time.strftime('%Y-%m-%d %H:%M:%S')}")
start = time.perf_counter()
llm = LLM(
model=model,
tokenizer=tokenizer,
quantization=quantization,
tensor_parallel_size=tensor_parallel_size,
seed=seed,
trust_remote_code=trust_remote_code,
dtype=dtype,
max_model_len=max_model_len,
gpu_memory_utilization=gpu_memory_utilization,
enforce_eager=enforce_eager,
kv_cache_dtype=kv_cache_dtype,
quantization_param_path=quantization_param_path,
device=device,
enable_prefix_caching=enable_prefix_caching,
download_dir=download_dir,
enable_chunked_prefill=enable_chunked_prefill,
max_num_batched_tokens=max_num_batched_tokens,
distributed_executor_backend=distributed_executor_backend,
load_format=load_format,
# num_scheduler_steps=num_scheduler_steps,
# use_v2_block_manager=use_v2_block_manager,
# disable_async_output_proc=disable_async_output_proc,
)
end = time.perf_counter()
print(f"Finish initializing LLM at {time.strftime('%Y-%m-%d %H:%M:%S')}")
print(f"vllm init time: {end - start}")
```
### vllm ray start time
```python
def run_ray_vllm(
requests: List[Tuple[str, int, int]],
model: str,
tokenizer: str,
quantization: Optional[str],
tensor_parallel_size: int,
seed: int,
n: int,
use_beam_search: bool,
trust_remote_code: bool,
dtype: str,
max_model_len: Optional[int],
enforce_eager: bool,
kv_cache_dtype: str,
quantization_param_path: Optional[str],
device: str,
enable_prefix_caching: bool,
enable_chunked_prefill: bool,
max_num_batched_tokens: int,
distributed_executor_backend: Optional[str],
gpu_memory_utilization: float = 0.9,
num_scheduler_steps: int = 1,
use_v2_block_manager: bool = False,
download_dir: Optional[str] = None,
load_format: str = EngineArgs.load_format,
disable_async_output_proc: bool = False,
) -> float:
    # Import the required libraries
from vllm import LLM, SamplingParams
import ray
@ray.remote
class LLMWorker:
def __init__(self, model, tokenizer, quantization, tensor_parallel_size, seed, trust_remote_code, dtype, max_model_len, gpu_memory_utilization, enforce_eager, kv_cache_dtype, quantization_param_path, device, enable_prefix_caching, download_dir, enable_chunked_prefill, max_num_batched_tokens, distributed_executor_backend, load_format, num_scheduler_steps, use_v2_block_manager, disable_async_output_proc):
from vllm import LLM
start = time.perf_counter()
self.llm = LLM(
model=model,
tokenizer=tokenizer,
quantization=quantization,
tensor_parallel_size=tensor_parallel_size,
seed=seed,
trust_remote_code=trust_remote_code,
dtype=dtype,
max_model_len=max_model_len,
gpu_memory_utilization=gpu_memory_utilization,
enforce_eager=enforce_eager,
kv_cache_dtype=kv_cache_dtype,
quantization_param_path=quantization_param_path,
device=device,
enable_prefix_caching=enable_prefix_caching,
download_dir=download_dir,
enable_chunked_prefill=enable_chunked_prefill,
max_num_batched_tokens=max_num_batched_tokens,
distributed_executor_backend=distributed_executor_backend,
load_format=load_format,
# num_scheduler_steps=num_scheduler_steps,
# use_v2_block_manager=use_v2_block_manager,
# disable_async_output_proc=disable_async_output_proc,
)
end = time.perf_counter()
print(f"Finish initializing LLM at {time.strftime('%Y-%m-%d %H:%M:%S')}")
print(f"vllm init time: {end - start}")
def generate(self, prompts, sampling_params):
return self.llm.generate(prompts, sampling_params, use_tqdm=True)
# Create LLM worker
worker = LLMWorker.remote(
model=model,
tokenizer=tokenizer,
quantization=quantization,
tensor_parallel_size=tensor_parallel_size,
seed=seed,
trust_remote_code=trust_remote_code,
dtype=dtype,
max_model_len=max_model_len,
gpu_memory_utilization=gpu_memory_utilization,
enforce_eager=enforce_eager,
kv_cache_dtype=kv_cache_dtype,
quantization_param_path=quantization_param_path,
device=device,
enable_prefix_caching=enable_prefix_caching,
download_dir=download_dir,
enable_chunked_prefill=enable_chunked_prefill,
max_num_batched_tokens=max_num_batched_tokens,
distributed_executor_backend=distributed_executor_backend,
load_format=load_format,
num_scheduler_steps=num_scheduler_steps,
use_v2_block_manager=use_v2_block_manager,
disable_async_output_proc=disable_async_output_proc,
)
# Add the requests to the engine.
prompts: List[str] = []
sampling_params: List[SamplingParams] = []
for prompt, _, output_len in requests:
prompts.append(prompt)
sampling_params.append(
SamplingParams(
n=n,
temperature=0.0 if use_beam_search else 1.0,
top_p=1.0,
use_beam_search=use_beam_search,
ignore_eos=True,
max_tokens=output_len,
)
)
start = time.perf_counter()
ray.get(worker.generate.remote(prompts, sampling_params))
end = time.perf_counter()
return end - start
```
 | Someone correct me if I'm wrong, but the workers are initialized sequentially on the main process, which can be seen in the function I linked below
https://github.com/vllm-project/vllm/blob/bbd3e86926f15e59e4c62246b4b3185e71fe7ff2/vllm/executor/ray_gpu_executor.py#L109
Ray adds additional overhead because you have to send the whole worker configs through Ray, which is a slower process
> someone correct me if im wrong but the way the workers are initialized are done sequentially on the main process. which can be seen in the function I linked below
>
> https://github.com/vllm-project/vllm/blob/bbd3e86926f15e59e4c62246b4b3185e71fe7ff2/vllm/executor/ray_gpu_executor.py#L109
>
> ray add additional overhead because you have to send the whole worker configs through Ray which is a slower process
Thank you for your answer! However, I still have some concerns about the initialization overhead:
For a 7B model:
- TP=1: 7s initialization time (baseline)
- TP=4: 14s initialization time (+7s overhead)
- TP=4 with Ray: 24s initialization time (+10s additional overhead)
The overhead seems disproportionately large considering:
1. The baseline initialization is only 7 seconds
2. Moving from TP=1 to TP=4 doubles the initialization time
3. Adding Ray introduces an additional 10s overhead, which is even larger than the TP scaling overhead
Is this level of overhead expected? It seems excessive for a 7B model, especially since:
- The TP=1 to TP=4 transition adds 100% overhead
- Ray integration adds another ~71% overhead on top of TP=4
Could there be potential optimization opportunities to reduce these initialization costs?
I don't find that overhead too strange, and there definitely is room for optimizations (parallelizing the process) but engine startup time is not really an important metric that people worry about. (model reloading would probably be the solution more people are interested in that is currently not implemented?) is there a reason you're looking for faster initialization?
> I don't find that overhead too strange, and there definitely is room for optimizations (parallelizing the process) but engine startup time is not really an important metric that people worry about. (model reloading would probably be the solution more people are interested in that is currently not implemented?) is there a reason you're looking for faster initialization?
Great, thanks for your reply!
We want to improve the startup speed. IMHO, 34s is also too long to wait, especially when we are developing new features and want to run some tests to verify them.
I did some benchmarking on Ray executor initialization:
Code:
https://github.com/vllm-project/vllm/pull/11272
Command:
```
python3 benchmarks/benchmark_latency.py --model meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 4 --num-iters-warmup 1 --num-iters 2 --batch-size 8 --input-len 128 --output-len 256 --max-model-len 2048 --no-enable-prefix-caching --distributed-executor-backend ray
```
TP = 2
INFO 12-17 22:57:58 llm_engine.py:507] time for initialize_ray_cluster: 2.265391
INFO 12-17 22:58:05 ray_gpu_executor.py:160] time for creating workers: 6.369353
INFO 12-17 22:58:05 ray_gpu_executor.py:252] time for _run_workers update_environment_variables: 0.002733
INFO 12-17 22:58:08 ray_gpu_executor.py:283] time for _run_workers init_device: 1.320933
INFO 12-17 22:58:12 ray_gpu_executor.py:290] time for _run_workers load_model: 3.812319
INFO 12-17 22:58:12 ray_gpu_executor.py:67] time for _init_workers_ray: 12.652736
TP = 4
INFO 12-17 23:00:31 llm_engine.py:507] time for initialize_ray_cluster: 2.435579
INFO 12-17 23:00:44 ray_gpu_executor.py:160] time for creating workers: 12.788581
INFO 12-17 23:00:44 ray_gpu_executor.py:252] time for _run_workers update_environment_variables: 0.003613
INFO 12-17 23:00:46 ray_gpu_executor.py:278] time for _run_workers init_worker: 1.149088
INFO 12-17 23:00:47 ray_gpu_executor.py:283] time for _run_workers init_device: 1.840026
INFO 12-17 23:00:50 ray_gpu_executor.py:290] time for _run_workers load_model: 2.787669
INFO 12-17 23:00:50 ray_gpu_executor.py:67] time for _init_workers_ray: 18.580796
The time to create workers increases proportionally with the number of workers, while the other steps are quite constant. I will look into potential ways to optimize.
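A minimal sketch (not the vLLM code) of the pattern the patch above moves to: launch all the remote `get_node_ip` calls first and wait on them once, instead of blocking on each worker in turn.
```python
# Illustrative only: batching remote calls so the driver pays one wait,
# not one round trip per worker.
import ray

ray.init()

@ray.remote
class Worker:
    def get_node_ip(self) -> str:
        return ray.util.get_node_ip_address()

workers = [Worker.remote() for _ in range(4)]

# Sequential: each ray.get blocks until that single worker answers.
slow_ips = [ray.get(w.get_node_ip.remote()) for w in workers]

# Batched: fire all calls, then wait once for the whole list.
refs = [w.get_node_ip.remote() for w in workers]
fast_ips = ray.get(refs)
print(fast_ips)
```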
| 1,734,483,554,000 | null | Performance Issue | [
"vllm/executor/ray_gpu_executor.py:RayGPUExecutor._init_workers_ray"
] | [] |
|
vllm-project/vllm | vllm-project__vllm-9617 | e7116c017c86cb547f4d1888edaf13a9be2a4562 | diff --git a/vllm/engine/llm_engine.py b/vllm/engine/llm_engine.py
index 3a29e6a9ae094..51a0d10db8f38 100644
--- a/vllm/engine/llm_engine.py
+++ b/vllm/engine/llm_engine.py
@@ -1612,7 +1612,7 @@ def _get_stats(self,
# KV Cache Usage in %
num_total_gpu = self.cache_config.num_gpu_blocks
gpu_cache_usage_sys = 0.
- if num_total_gpu is not None:
+ if num_total_gpu: # Guard against both None and 0
num_free_gpu = sum(
scheduler.block_manager.get_num_free_gpu_blocks()
for scheduler in self.scheduler)
@@ -1620,7 +1620,7 @@ def _get_stats(self,
num_total_cpu = self.cache_config.num_cpu_blocks
cpu_cache_usage_sys = 0.
- if num_total_cpu is not None and num_total_cpu > 0:
+ if num_total_cpu: # Guard against both None and 0
num_free_cpu = sum(
scheduler.block_manager.get_num_free_cpu_blocks()
for scheduler in self.scheduler)
| [Bug]: Support Falcon Mamba
### Your current environment
Does VLLM support Falcon Mamba models? if not, when it will be supported
### 🐛 Describe the bug
Does VLLM support Falcon Mamba models? if not, when it will be supported
| cc @tlrmchlsmth
Unsubscribe
On Wed, 14 Aug, 2024, 1:37 am Robert Shaw, ***@***.***> wrote:
> cc @tlrmchlsmth <https://github.com/tlrmchlsmth>
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/vllm-project/vllm/issues/7478#issuecomment-2287037438>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/BCU643RS6ZZAY5MXB4JRVXLZRJRR3AVCNFSM6AAAAABMO6JHLOVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDEOBXGAZTONBTHA>
> .
> You are receiving this because you are subscribed to this thread.Message
> ID: ***@***.***>
>
Hey @hahmad2008,
No, vLLM doesn't support Falcon Mamba yet. I have a work-in-progress PR (https://github.com/vllm-project/vllm/pull/6484) to add support for Mamba. I'll look into supporting FalconMamba as well.
Thank @tlrmchlsmth. Do you have any idea when this PR will be merged?
@tlrmchlsmth Do you have any idea when this PR will be merged?
Hi @hahmad2008, I’ve been prioritizing that PR over the last couple of days and I think will land it later this week. Do note that I’m not planning to add FalconMamba to #6484, but if it’s similar enough to Mamba or Mamba2, there will be a fast follow PR for support
Thank you @tlrmchlsmth. I can only load and apply inference from it using transformers version `4.45.0.dev0` which is still not released. I installed it using:
```
pip install -U git+https://github.com/huggingface/transformers.git
```
So do you think your PR will handle this?
@hahmad2008 sorry, haven't gotten a chance to look at FalconMamba yet -- If transformers 4.45 is needed, then I'll likely wait for that release.
@tlrmchlsmth Thanks! seems it will be released next week.
https://github.com/huggingface/transformers/issues/33236#issuecomment-2324529754
I think it is released now. @tlrmchlsmth please update on this.
I'll be landing Mamba soon, possibly today. Sorry for the delay on that one.
FalconMamba looks to be quite close to Mamba, so should be fairly easy to support. I'll try to get to it when I have some spare cycles but can't promise that I'll prioritize it. If somebody else wants to work on FalconMamba before I get to it, I would be happy to shepherd it
Hey guys,
I managed to add support for FalconMamba inside vLLM;
here is the link: #9325
Please feel free to check that out.
Thank you. I will try deploying it.
I am getting an error when I try to deploy. Below are the steps I followed:
- I followed https://docs.vllm.ai/en/stable/getting_started/installation.html#full-build-with-compilation to build fully
- I run the server using `vllm serve tiiuae/falcon-mamba-7b-instruct --dtype auto` command. I tried both with `-tp 4` and without `-tp 4`
**Hardware:**
AWS machine: g5.x24large (4xA10g)
I am getting below error:
```
INFO: Uvicorn running on socket ('0.0.0.0', 8000) (Press CTRL+C to quit)
ERROR 10-16 12:19:35 engine.py:160] ZeroDivisionError('division by zero')
ERROR 10-16 12:19:35 engine.py:160] Traceback (most recent call last):
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 10-16 12:19:35 engine.py:160] self.run_engine_loop()
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 214, in run_engine_loop
ERROR 10-16 12:19:35 engine.py:160] self.engine.do_log_stats()
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/llm_engine.py", line 1543, in do_log_stats
ERROR 10-16 12:19:35 engine.py:160] stats = self._get_stats(scheduler_outputs, model_output,
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/llm_engine.py", line 1583, in _get_stats
ERROR 10-16 12:19:35 engine.py:160] gpu_cache_usage_sys = 1.0 - (num_free_gpu / num_total_gpu)
ERROR 10-16 12:19:35 engine.py:160] ZeroDivisionError: division by zero
INFO 10-16 12:19:35 multiproc_worker_utils.py:134] Terminating local vLLM worker processes
(VllmWorkerProcess pid=8683) INFO 10-16 12:19:35 multiproc_worker_utils.py:242] Worker exiting
(VllmWorkerProcess pid=8684) INFO 10-16 12:19:35 multiproc_worker_utils.py:242] Worker exiting
(VllmWorkerProcess pid=8685) INFO 10-16 12:19:35 multiproc_worker_utils.py:242] Worker exiting
ERROR 10-16 12:19:45 client.py:250] TimeoutError('No heartbeat received from MQLLMEngine')
ERROR 10-16 12:19:45 client.py:250] NoneType: None
```
@dhiaEddineRhaiem What could be the issue?
Hello @hassanraha ,
I got the same error even with Mamba, not only with FalconMamba, so it is independent of the PR implementation.
As a workaround, I hardcoded it to the total number of GPUs I have locally in my EC2 instance.
@dhiaEddineRhaiem thanks. I changed `num_total_gpu` to the number of GPUs available. It worked.
@tlrmchlsmth is the below issue also fixed?
```
INFO: Uvicorn running on socket ('0.0.0.0', 8000) (Press CTRL+C to quit)
ERROR 10-16 12:19:35 engine.py:160] ZeroDivisionError('division by zero')
ERROR 10-16 12:19:35 engine.py:160] Traceback (most recent call last):
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 10-16 12:19:35 engine.py:160] self.run_engine_loop()
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 214, in run_engine_loop
ERROR 10-16 12:19:35 engine.py:160] self.engine.do_log_stats()
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/llm_engine.py", line 1543, in do_log_stats
ERROR 10-16 12:19:35 engine.py:160] stats = self._get_stats(scheduler_outputs, model_output,
ERROR 10-16 12:19:35 engine.py:160] File "/workspace/vllm/vllm/engine/llm_engine.py", line 1583, in _get_stats
ERROR 10-16 12:19:35 engine.py:160] gpu_cache_usage_sys = 1.0 - (num_free_gpu / num_total_gpu)
ERROR 10-16 12:19:35 engine.py:160] ZeroDivisionError: division by zero
INFO 10-16 12:19:35 multiproc_worker_utils.py:134] Terminating local vLLM worker processes
(VllmWorkerProcess pid=8683) INFO 10-16 12:19:35 multiproc_worker_utils.py:242] Worker exiting
(VllmWorkerProcess pid=8684) INFO 10-16 12:19:35 multiproc_worker_utils.py:242] Worker exiting
(VllmWorkerProcess pid=8685) INFO 10-16 12:19:35 multiproc_worker_utils.py:242] Worker exiting
ERROR 10-16 12:19:45 client.py:250] TimeoutError('No heartbeat received from MQLLMEngine')
ERROR 10-16 12:19:45 client.py:250] NoneType: None
``` | 1,729,693,073,000 | null | Feature Request | [
"vllm/engine/llm_engine.py:LLMEngine._get_stats"
] | [] |
|
joke2k/faker | joke2k__faker-2113 | b605d2118b2e5879795678d4e234bd3552fee0a9 | diff --git a/faker/factory.py b/faker/factory.py
index 074f1a05e3..6849570578 100644
--- a/faker/factory.py
+++ b/faker/factory.py
@@ -65,7 +65,7 @@ def create(
return faker
@classmethod
- @functools.cache
+ @functools.lru_cache(maxsize=None)
def _find_provider_class(
cls,
provider_path: str,
| Cache Factory._find_provider_class module look-ups, make Faker construction 20✕
### What does this change
This caches the look-ups of the provider class based on the provider name and locale (and specific subclass of Factory, if any), making construction of multiple Faker instances ~20✕ faster.
This shouldn't change external behaviour unless someone is doing things that seem surprising:
- using the same `provider_path` to refer to different modules, via some sort of dynamic module magic
- a provider that is highly dynamic somehow, e.g. its `default_locale` attribute changes
### What was wrong
Doing the provider class look-up can be quite slow, seemingly because of the `list_module` traversals, resulting in this appearing very high in the profiles of some test suites in my work repo (which create many independent faker instances, separately seeded).
For instance, running profiling in IPython with Faker v30.1.0 via:
```python
%prun -l 10 -s cumtime [faker.Faker() for _ in range(100)]
```
Takes 1.86 seconds and has this as the top 10 (cumulatively) slowest calls:
```
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 1.862 1.862 {built-in method builtins.exec}
1 0.000 0.000 1.862 1.862 <string>:1(<module>)
1 0.000 0.000 1.862 1.862 <string>:1(<listcomp>)
100 0.001 0.000 1.861 0.019 proxy.py:31(__init__)
100 0.005 0.000 1.860 0.019 factory.py:23(create)
2500 0.006 0.000 1.726 0.001 factory.py:66(_find_provider_class)
1900 0.002 0.000 1.650 0.001 loading.py:31(list_module)
1900 0.013 0.000 1.616 0.001 loading.py:38(<listcomp>)
61700 0.032 0.000 1.603 0.000 pkgutil.py:110(iter_modules)
61700 0.106 0.000 1.551 0.000 pkgutil.py:144(_iter_file_finder_modules)
```
### How this fixes it
By putting `@functools.cache` on `Factory._find_provider_class`, that function only runs once for each combination of provider_path, locale and cls (Factory subclass). This potentially increases memory usage slightly, but in all but extreme cases, each of those args should only be used with a limited number of values.
Benchmarks:
- Running `%timeit faker.Faker()` in IPython:
- Before: `12.2 ms ± 355 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)`
- After: `555 µs ± 32.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)`
- Faker's test suite: running Faker's own test suite (specifically the number reported in pytest's footer after running the 'main' test suite, not tests/pytest/session_overides, and not including any of the other commands tox runs) show approximately this behaviour: ~90s -> ~60s.
- With a similar change hacked into my real work repo, time to run a particular test suite that creates a lot of Fakers goes from ~35s -> ~15s.
(NB. the second two "macro" benchmarks are very noisy.)
Running the same profiling command now takes 0.135s and shows these top 10 calls:
```
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.135 0.135 {built-in method builtins.exec}
1 0.000 0.000 0.135 0.135 <string>:1(<module>)
1 0.000 0.000 0.135 0.135 <string>:1(<listcomp>)
100 0.000 0.000 0.135 0.001 proxy.py:31(__init__)
100 0.002 0.000 0.134 0.001 factory.py:24(create)
2500 0.052 0.000 0.131 0.000 generator.py:32(add_provider)
2500 0.032 0.000 0.032 0.000 {built-in method builtins.dir}
176400 0.016 0.000 0.016 0.000 {method 'startswith' of 'str' objects}
80400 0.009 0.000 0.016 0.000 generator.py:100(set_formatter)
98500 0.011 0.000 0.011 0.000 {built-in method builtins.getattr}
```
### Checklist
- [x] I have read the documentation about [CONTRIBUTING](https://github.com/joke2k/faker/blob/master/CONTRIBUTING.rst)
- [x] I have run `make lint`
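A minimal sketch of the caching approach described above (names simplified, not Faker's actual code). Note the merged patch uses `functools.lru_cache(maxsize=None)` rather than `functools.cache`, presumably because `functools.cache` only exists on Python 3.9+.
```python
import functools
from typing import Optional

class Factory:
    @classmethod
    @functools.lru_cache(maxsize=None)
    def _find_provider_class(cls, provider_path: str, locale: Optional[str] = None):
        # Stand-in for the expensive pkgutil/list_module scan: with the cache,
        # it runs once per (cls, provider_path, locale) combination.
        print(f"scanning modules for {provider_path} ({locale})")
        return object()

Factory._find_provider_class("faker.providers.person", "en_US")  # scans
Factory._find_provider_class("faker.providers.person", "en_US")  # served from cache
```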
| Thank you for merging so quickly! | 1,728,113,346,000 | null | Performance Issue | [
"faker/factory.py:Factory"
] | [
"faker/factory.py:Factory"
] |
|
mlflow/mlflow | mlflow__mlflow-13390 | 49e038235f64cee0d6985293b9e5a24d2718abab | diff --git a/mlflow/openai/_openai_autolog.py b/mlflow/openai/_openai_autolog.py
index d67da788da443..5ddbf87dc4379 100644
--- a/mlflow/openai/_openai_autolog.py
+++ b/mlflow/openai/_openai_autolog.py
@@ -159,7 +159,6 @@ def _stream_output_logging_hook(stream: Iterator) -> Iterator:
yield chunk
try:
- chunk_dicts = []
chunk_dicts = [chunk.to_dict() for chunk in chunks]
if config.log_traces and request_id:
mlflow_client.end_trace(
| Remove useless `chunk_dicts`
### Summary
```diff
diff --git a/mlflow/openai/_openai_autolog.py b/mlflow/openai/_openai_autolog.py
index 149e92793..45e486808 100644
--- a/mlflow/openai/_openai_autolog.py
+++ b/mlflow/openai/_openai_autolog.py
@@ -158,7 +158,6 @@ def patched_call(original, self, *args, **kwargs):
yield chunk
try:
- chunk_dicts = []
chunk_dicts = [chunk.to_dict() for chunk in chunks]
if config.log_traces and request_id:
mlflow_client.end_trace(
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| @harupy ill work on this issue | 1,728,658,506,000 | null | Performance Issue | [
"mlflow/openai/_openai_autolog.py:patched_call"
] | [] |
|
huggingface/transformers | huggingface__transformers-34507 | dadb286f061f156d01b80e12594321e890b53088 | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 1603a4ec215557..80f8a60a34b622 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1671,21 +1671,21 @@ def num_examples(self, dataloader: DataLoader) -> int:
except (NameError, AttributeError, TypeError): # no dataset or length, estimate by length of dataloader
return len(dataloader) * self.args.per_device_train_batch_size
- def num_tokens(self, train_dl: DataLoader, max_steps: Optional[int] = None) -> int:
+ @staticmethod
+ def num_tokens(train_dl: DataLoader, max_steps: Optional[int] = None) -> int:
"""
Helper to get number of tokens in a [`~torch.utils.data.DataLoader`] by enumerating dataloader.
"""
train_tokens = 0
try:
- for step, batch in enumerate(train_dl):
+ for batch in train_dl:
tokens = batch["input_ids"].numel()
if max_steps is not None:
return tokens * max_steps
train_tokens += tokens
- return train_tokens
except KeyError:
logger.warning("Cannot get num_tokens from dataloader")
- return train_tokens
+ return train_tokens
def _hp_search_setup(self, trial: Union["optuna.Trial", Dict[str, Any]]):
"""HP search setup code"""
@@ -2439,7 +2439,6 @@ def _inner_training_loop(
epoch_iterator = iter(epoch_dataloader)
# We chunkify the epoch iterator into gradient accumulation steps `n` batches
remainder = num_examples % args.gradient_accumulation_steps
- num_items_in_batch = None
if remainder == 0:
remainder = args.gradient_accumulation_steps
update_step = -1
@@ -2562,7 +2561,9 @@ def _inner_training_loop(
self.state.global_step += 1
self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
self.control = self.callback_handler.on_step_end(args, self.state, self.control)
- self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
+ self._maybe_log_save_evaluate(
+ tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time
+ )
else:
self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
@@ -2587,7 +2588,7 @@ def _inner_training_loop(
self.control.should_training_stop = True
self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
- self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
+ self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time)
if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
if is_torch_xla_available():
@@ -2992,7 +2993,7 @@ def _evaluate(self, trial, ignore_keys_for_eval, skip_scheduler=False):
) from exc
return metrics
- def _maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval):
+ def _maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time):
if self.control.should_log and self.state.global_step > self._globalstep_last_logged:
if is_torch_xla_available():
xm.mark_step()
@@ -3014,7 +3015,7 @@ def _maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, igno
self._globalstep_last_logged = self.state.global_step
self.store_flos()
- self.log(logs)
+ self.log(logs, start_time)
metrics = None
if self.control.should_evaluate:
@@ -3512,7 +3513,7 @@ def hyperparameter_search(
self.hp_search_backend = None
return best_run
- def log(self, logs: Dict[str, float]) -> None:
+ def log(self, logs: Dict[str, float], start_time: Optional[float] = None) -> None:
"""
Log `logs` on the various objects watching training.
@@ -3521,11 +3522,15 @@ def log(self, logs: Dict[str, float]) -> None:
Args:
logs (`Dict[str, float]`):
The values to log.
+ start_time (`Optional[float]`):
+ The start of training.
"""
if self.state.epoch is not None:
logs["epoch"] = self.state.epoch
if self.args.include_num_input_tokens_seen:
logs["num_input_tokens_seen"] = self.state.num_input_tokens_seen
+ if start_time is not None:
+ speed_metrics("train", start_time, num_tokens=self.state.num_input_tokens_seen)
output = {**logs, **{"step": self.state.global_step}}
self.state.log_history.append(output)
| Move Trainer's tokens per second metric into the inner training loop
### Feature request
Right now `include_tokens_per_second=True` in `Trainer` only reports the tokens per second metric at [the end of training](https://github.com/huggingface/transformers/blob/c1753436dbb8bcbcee183cdd6eba9f08a90d602a/src/transformers/trainer.py#L2610). It would be very useful to have this metric reported continuously inside the training loop so we can monitor it during training.
### Motivation
The current behavior is counter-intuitive, doesn't align with other convenient trainers (like torchtune), and it's undocumented, so I had to RTFC to figure out why the metric wasn't showing up.
### Your contribution
I probably don't have time to contribute it myself.
| cc @muellerzr @SunMarc | 1,730,291,622,000 | null | Feature Request | [
"src/transformers/trainer.py:Trainer.num_tokens",
"src/transformers/trainer.py:Trainer",
"src/transformers/trainer.py:Trainer._inner_training_loop",
"src/transformers/trainer.py:Trainer._maybe_log_save_evaluate",
"src/transformers/trainer.py:Trainer.log"
] | [] |
|
huggingface/transformers | huggingface__transformers-34279 | 93352e81f5019abaa52f7bdc2e3284779e864367 | diff --git a/src/transformers/integrations/integration_utils.py b/src/transformers/integrations/integration_utils.py
index 4f7cf3632fe549..a09116552c8e34 100755
--- a/src/transformers/integrations/integration_utils.py
+++ b/src/transformers/integrations/integration_utils.py
@@ -1218,6 +1218,8 @@ def setup(self, args, state, model):
and other parameters are ignored.
- **MLFLOW_FLATTEN_PARAMS** (`str`, *optional*, defaults to `False`):
Whether to flatten the parameters dictionary before logging.
+ - **MLFLOW_MAX_LOG_PARAMS** (`int`, *optional*):
+ Set the maximum number of parameters to log in the run.
"""
self._log_artifacts = os.getenv("HF_MLFLOW_LOG_ARTIFACTS", "FALSE").upper() in ENV_VARS_TRUE_VALUES
self._nested_run = os.getenv("MLFLOW_NESTED_RUN", "FALSE").upper() in ENV_VARS_TRUE_VALUES
@@ -1225,6 +1227,7 @@ def setup(self, args, state, model):
self._experiment_name = os.getenv("MLFLOW_EXPERIMENT_NAME", None)
self._flatten_params = os.getenv("MLFLOW_FLATTEN_PARAMS", "FALSE").upper() in ENV_VARS_TRUE_VALUES
self._run_id = os.getenv("MLFLOW_RUN_ID", None)
+ self._max_log_params = os.getenv("MLFLOW_MAX_LOG_PARAMS", None)
# "synchronous" flag is only available with mlflow version >= 2.8.0
# https://github.com/mlflow/mlflow/pull/9705
@@ -1273,6 +1276,13 @@ def setup(self, args, state, model):
del combined_dict[name]
# MLflow cannot log more than 100 values in one go, so we have to split it
combined_dict_items = list(combined_dict.items())
+ if self._max_log_params and self._max_log_params.isdigit():
+ max_log_params = int(self._max_log_params)
+ if max_log_params < len(combined_dict_items):
+ logger.debug(
+ f"Reducing the number of parameters to log from {len(combined_dict_items)} to {max_log_params}."
+ )
+ combined_dict_items = combined_dict_items[:max_log_params]
for i in range(0, len(combined_dict_items), self._MAX_PARAMS_TAGS_PER_BATCH):
if self._async_log:
self._ml_flow.log_params(
| Limit number of parametes logged with `MLflowCallback`
### Feature request
Add a new environment variable, such as `MLFLOW_MAX_LOG_PARAMS`, which can limit the number of parameters logged by the `MLflowCallback`.
### Motivation
When using mlflow in Azure ML, there is a limit of 200 parameters that can be logged in a single run, meaning that when attempting to run a training job, the callback needs to be disabled entirely, or the module needs to be "monkeypatched" to limit the number of params logged.
### Your contribution
I will submit a PR
| 1,729,505,760,000 | null | Feature Request | [
"src/transformers/integrations/integration_utils.py:MLflowCallback.setup"
] | [] |
||
huggingface/transformers | huggingface__transformers-34208 | 343c8cb86f2ab6a51e7363ee11f69afb1c9e839e | diff --git a/src/transformers/agents/tools.py b/src/transformers/agents/tools.py
index cfb1e4cf95ced9..a425ffc8f106b2 100644
--- a/src/transformers/agents/tools.py
+++ b/src/transformers/agents/tools.py
@@ -138,7 +138,7 @@ def validate_arguments(self):
"inputs": Dict,
"output_type": str,
}
- authorized_types = ["string", "integer", "number", "image", "audio", "any"]
+ authorized_types = ["string", "integer", "number", "image", "audio", "any", "boolean"]
for attr, expected_type in required_attributes.items():
attr_value = getattr(self, attr, None)
| Boolean as tool input
### Feature request
It would be great if `boolean` was authorized as input to a `Tool`
### Motivation
I am willing to use my own tools with transformers CodeAgent ; using the method `tool`
I have a proper function `func` with typing and doc-strings as required. One of the input of the function is a `bool`.
When I try to run `tool(func)` I get: `Exception: Input 'perte_de_salaire': type 'boolean' is not an authorized value, should be one of ['string', 'integer', 'number', 'image', 'audio', 'any']`.
The Exception is rather clear, but why wouldn't a type as basic as `boolean` not be allowed? Especially since any is authorized. This is clearly a limitation to using the library.
### Your contribution
I seems like a few lines of code to change in tools.py (https://github.com/huggingface/transformers/blob/main/src/transformers/agents/tools.py)
| cc @aymeric-roucher
Please assign this to me | 1,729,135,245,000 | null | Feature Request | [
"src/transformers/agents/tools.py:Tool.validate_arguments"
] | [] |
|
django/django | django__django-18654 | c334c1a8ff4579cdb1dd77cce8da747070ac9fc4 | diff --git a/django/urls/base.py b/django/urls/base.py
index 753779c75b46..bb40ba222436 100644
--- a/django/urls/base.py
+++ b/django/urls/base.py
@@ -127,8 +127,9 @@ def clear_script_prefix():
def set_urlconf(urlconf_name):
"""
- Set the URLconf for the current thread (overriding the default one in
- settings). If urlconf_name is None, revert back to the default.
+ Set the URLconf for the current thread or asyncio task (overriding the
+ default one in settings). If urlconf_name is None, revert back to the
+ default.
"""
if urlconf_name:
_urlconfs.value = urlconf_name
@@ -139,8 +140,8 @@ def set_urlconf(urlconf_name):
def get_urlconf(default=None):
"""
- Return the root URLconf to use for the current thread if it has been
- changed from the default one.
+ Return the root URLconf to use for the current thread or asyncio task if it
+ has been changed from the default one.
"""
return getattr(_urlconfs, "value", default)
| Clarify django.urls.set_urlconf scoping behaviour
Description
django.urls.set_urlconf docstring mentions setting the urlconf for the current thread. However, this is backed by asgiref.local.Local, which is supposed to provide scoping features related to asyncio tasks as well. This becomes relevant, for example, when doing multi-tenancy with more than one urlconf and trying to call django.urls.reverse in an ASGI application.
I have been trying to infer what is the expected behaviour in async Django code by following the current implementation, and I found that asgiref.local.Local behaviour has changed over time (see https://github.com/django/asgiref/issues/473).
I assume that using asgiref.local.Local instead of threading.local hints at an intention to give set_urlconf/get_urlconf meaningful semantics for Channels consumers or ASGI applications.
Whether the intention is to isolate set_urlconf/get_urlconf across different asyncio tasks or to only support isolation between threads, I suppose it would be useful if their behaviour was documented also for the case of asyncio code, especially given they back django.urls.reverse.
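For the multi-tenancy case described above, a minimal sketch of the documented alternative (module and host names are made up): set `HttpRequest.urlconf` from a middleware rather than calling the internal `set_urlconf()`.
```python
# settings.ROOT_URLCONF stays the default; only matching requests are overridden.
def tenant_urlconf_middleware(get_response):
    def middleware(request):
        host = request.get_host().split(":")[0]
        if host.startswith("tenant2."):
            request.urlconf = "myproject.urls_tenant2"  # hypothetical module
        return get_response(request)
    return middleware
```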
| ["I'm struggling to follow what this is asking for - can you share an example of the behavior you're seeing? From what I can see, both async and sync requests handle the urlconf the same - it is the ROOT_URLCONF unless set by a middleware \u200bas documented.", 1728267595.0]
["Firstly, just for Django, set_urlconf is not public API so you shouldn't use it. Rather prefer the documented HttpRequest.urlconf attribute (\u200bdocs) to set per-tenant URLs in a middleware, if that's your strategy. Secondly, the \u200blinked asgiref issue (#473) is in progress, and should be resolved shortly. (Setting locals has been leaking out of asyncio tasks since the recent v3.8, which is a regression. You can pin to v3.7.2 pending the next release.) It's not a Django issue. If you need further advice please see TicketClosingReasons/UseSupportChannels.", 1728268240.0]
['From what I could see, "Set the URLconf for the current thread" in set_urlconf\'s docstring looks like a leftover from the pre-async times and could become "Set the URLconf for the current thread or asyncio task". I understand it\'s not a public API and indeed I only set request.urlconf in my code. We spotted the docstring while trying to double check the validity of what we were designing on the async case, and had some extra digging to do to reassure ourselves we were still doing the right thing, hence this issue. I\'m surprised the ticket got closed, but fair enough.', 1728269648.0]
['Yes, OK, just changing the docstring would be fine.', 1728270267.0]
['\u200bPR', 1728270548.0]
['In d876be79: Fixed #35807 -- Mentioned async case for internal get/set urlconf helpers.', 1728274635.0] | 1,728,288,525,000 | null | Feature Request | [
"django/urls/base.py:set_urlconf",
"django/urls/base.py:get_urlconf"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-9815 | 13e8fdecda91e27e40b15fa8a8f456ade773e6eb | diff --git a/src/diffusers/training_utils.py b/src/diffusers/training_utils.py
index d2bf3fe07185..2474ed5c2114 100644
--- a/src/diffusers/training_utils.py
+++ b/src/diffusers/training_utils.py
@@ -43,6 +43,9 @@ def set_seed(seed: int):
Args:
seed (`int`): The seed to set.
+
+ Returns:
+ `None`
"""
random.seed(seed)
np.random.seed(seed)
@@ -58,6 +61,17 @@ def compute_snr(noise_scheduler, timesteps):
"""
Computes SNR as per
https://github.com/TiankaiHang/Min-SNR-Diffusion-Training/blob/521b624bd70c67cee4bdf49225915f5945a872e3/guided_diffusion/gaussian_diffusion.py#L847-L849
+ for the given timesteps using the provided noise scheduler.
+
+ Args:
+ noise_scheduler (`NoiseScheduler`):
+ An object containing the noise schedule parameters, specifically `alphas_cumprod`, which is used to compute
+ the SNR values.
+ timesteps (`torch.Tensor`):
+ A tensor of timesteps for which the SNR is computed.
+
+ Returns:
+ `torch.Tensor`: A tensor containing the computed SNR values for each timestep.
"""
alphas_cumprod = noise_scheduler.alphas_cumprod
sqrt_alphas_cumprod = alphas_cumprod**0.5
| [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:
r"""
Function that creates a simulation.
Args:
parameter_1 (`str` or `List[str]`):
Description of game level.
parameter_2 (`int`, *optional*):
Kardashev scale of civilization.
parameter_3 (`float`, defaults to `42.0`):
Difficulty scale.
Returns:
[`~simulations.objects.Civilization`]
A civilization simulation with provided initialization parameters.
"""
```
Some examples that don't follow the docstring convention are:
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations
- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas as [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after "Args", but should be before
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation
There are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!
Please limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews.
| Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue 🙂
No prerequisites that I can think of off the top of my head. Just that the PRs should be limited in scope as mentioned. You can maybe look at the Diffusers contribution guide (and philosophy, if you're interested).
I'll take up some of these.
I’m also interested in this work. Could you let me know about the current progress? @SubhasmitaSw @charchit7
Hey @yijun-lee, I've been caught up with some work, unfortunately, but I’ll work on this over the weekend or later today. If you want to get started on any of these tasks, feel free to go ahead and let us know, so we can pick up whatever is left.
Oh, actually, I think I’ll be starting this weekend as well. If we proceed separately, it would be good to inform each other through comments or other means. Have a good day :) @charchit7
Sure, @yijun-lee, that works!
you too :)
Hello there guys, I'd also like to contribute in this issue. I'm sorry I didn't really drop in a message here yet but I hope this PR helps push things forward! A g'day to all.
Feel free to take up as many files as you want (one file per PR however)! The ones mentioned in the issue description are just a few examples, but there are probably hundreds of files that could use improvements. Please keep them coming, thanks
Hello! I'm also following this issue with interest. I’ve submitted my first PR, so please let me know if there are any mistakes! Have a great day!
Hello! Thanks for opening this interesting issue! I'm fully on board for this new work! 🙆🏻♀️ 🙆🏻♀️
I also have opened my PR, please let me know if I missed something !
+ Q: which should I prioritize, modern Python docstring conventions or consistency with the rest of the file (e.g. its existing phrasing)?
@a-r-r-o-w hi, I want to work on it
Hello colleagues,
I'm also interested in this issue and have made a first PR related to it.
Since there are lots of docstrings to update, I'm also interested in listing the files with missing docstrings if I have time :)
Thank you in advance for your time and guidance!
Hello everyone! 😊
I'm also interested in this issue and have made some updates to the docstrings in `src/diffusers/training_utils.py`. I would love to get your feedback on my changes!
I’m excited to contribute and be part of this discussion. Thank you in advance for your time and guidance 🤗
Hi I would love to be of help here.
I have made some additions to the docstrings in src/diffusers/training_utils.py.
Would love to get your feedback on the PR :)
| 1,730,319,460,000 | null | Feature Request | [
"src/diffusers/training_utils.py:set_seed",
"src/diffusers/training_utils.py:compute_snr"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-9606 | 92d2baf643b6198c2df08d9e908637ea235d84d1 | diff --git a/src/diffusers/training_utils.py b/src/diffusers/training_utils.py
index 57bd9074870c..11a4e1cc8069 100644
--- a/src/diffusers/training_utils.py
+++ b/src/diffusers/training_utils.py
@@ -36,8 +36,9 @@
def set_seed(seed: int):
"""
- Args:
Helper function for reproducible behavior to set the seed in `random`, `numpy`, `torch`.
+
+ Args:
seed (`int`): The seed to set.
"""
random.seed(seed)
@@ -194,6 +195,13 @@ def unet_lora_state_dict(unet: UNet2DConditionModel) -> Dict[str, torch.Tensor]:
def cast_training_params(model: Union[torch.nn.Module, List[torch.nn.Module]], dtype=torch.float32):
+ """
+ Casts the training parameters of the model to the specified data type.
+
+ Args:
+ model: The PyTorch model whose parameters will be cast.
+ dtype: The data type to which the model parameters will be cast.
+ """
if not isinstance(model, list):
model = [model]
for m in model:
@@ -225,7 +233,8 @@ def _set_state_dict_into_text_encoder(
def compute_density_for_timestep_sampling(
weighting_scheme: str, batch_size: int, logit_mean: float = None, logit_std: float = None, mode_scale: float = None
):
- """Compute the density for sampling the timesteps when doing SD3 training.
+ """
+ Compute the density for sampling the timesteps when doing SD3 training.
Courtesy: This was contributed by Rafie Walker in https://github.com/huggingface/diffusers/pull/8528.
@@ -244,7 +253,8 @@ def compute_density_for_timestep_sampling(
def compute_loss_weighting_for_sd3(weighting_scheme: str, sigmas=None):
- """Computes loss weighting scheme for SD3 training.
+ """
+ Computes loss weighting scheme for SD3 training.
Courtesy: This was contributed by Rafie Walker in https://github.com/huggingface/diffusers/pull/8528.
@@ -261,7 +271,9 @@ def compute_loss_weighting_for_sd3(weighting_scheme: str, sigmas=None):
def free_memory():
- """Runs garbage collection. Then clears the cache of the available accelerator."""
+ """
+ Runs garbage collection. Then clears the cache of the available accelerator.
+ """
gc.collect()
if torch.cuda.is_available():
@@ -494,7 +506,8 @@ def pin_memory(self) -> None:
self.shadow_params = [p.pin_memory() for p in self.shadow_params]
def to(self, device=None, dtype=None, non_blocking=False) -> None:
- r"""Move internal buffers of the ExponentialMovingAverage to `device`.
+ r"""
+ Move internal buffers of the ExponentialMovingAverage to `device`.
Args:
device: like `device` argument to `torch.Tensor.to`
@@ -528,23 +541,25 @@ def state_dict(self) -> dict:
def store(self, parameters: Iterable[torch.nn.Parameter]) -> None:
r"""
+ Saves the current parameters for restoring later.
+
Args:
- Save the current parameters for restoring later.
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
+ parameters: Iterable of `torch.nn.Parameter`. The parameters to be temporarily stored.
"""
self.temp_stored_params = [param.detach().cpu().clone() for param in parameters]
def restore(self, parameters: Iterable[torch.nn.Parameter]) -> None:
r"""
- Args:
- Restore the parameters stored with the `store` method. Useful to validate the model with EMA parameters without:
- affecting the original optimization process. Store the parameters before the `copy_to()` method. After
+ Restore the parameters stored with the `store` method. Useful to validate the model with EMA parameters
+ without: affecting the original optimization process. Store the parameters before the `copy_to()` method. After
validation (or model saving), use this to restore the former parameters.
+
+ Args:
parameters: Iterable of `torch.nn.Parameter`; the parameters to be
updated with the stored parameters. If `None`, the parameters with which this
`ExponentialMovingAverage` was initialized will be used.
"""
+
if self.temp_stored_params is None:
raise RuntimeError("This ExponentialMovingAverage has no `store()`ed weights " "to `restore()`")
if self.foreach:
@@ -560,9 +575,10 @@ def restore(self, parameters: Iterable[torch.nn.Parameter]) -> None:
def load_state_dict(self, state_dict: dict) -> None:
r"""
- Args:
Loads the ExponentialMovingAverage state. This method is used by accelerate during checkpointing to save the
ema state dict.
+
+ Args:
state_dict (dict): EMA state. Should be an object returned
from a call to :meth:`state_dict`.
"""
| [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:
r"""
Function that creates a simulation.
Args:
parameter_1 (`str` or `List[str]`):
Description of game level.
parameter_2 (`int`, *optional*):
Kardashev scale of civilization.
parameter_3 (`float`, defaults to `42.0`):
Difficulty scale.
Returns:
[`~simulations.objects.Civilization`]
A civilization simulation with provided initialization parameters.
"""
```
Some examples that don't follow the docstring convention are:
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations
- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas as [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after "Args", but should be before
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation
There are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!
Please limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews.
| Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue 🙂
No prerequisites that I can think of off the top of my head. Just that the PRs should be limited in scope as mentioned. You can maybe look at the Diffusers contribution guide (and philosophy, if you're interested).
I'll take up some of these.
I’m also interested in this work. Could you let me know about the current progress? @SubhasmitaSw @charchit7
Hey @yijun-lee, I've been caught up with some work, unfortunately, but I’ll work on this over the weekend or later today. If you want to get started on any of these tasks, feel free to go ahead and let us know, so we can pick up whatever is left.
Oh, actually, I think I’ll be starting this weekend as well. If we proceed separately, it would be good to inform each other through comments or other means. Have a good day :) @charchit7
Sure, @yijun-lee, that works!
you too :)
Hello there guys, I'd also like to contribute in this issue. I'm sorry I didn't really drop in a message here yet but I hope this PR helps push things forward! A g'day to all.
Feel free to take up as many files as you want (one file per PR however)! The ones mentioned in the issue description are just a few examples, but there are probably hundreds of files that could use improvements. Please keep them coming, thanks
Hello! I'm also following this issue with interest. I’ve submitted my first PR, so please let me know if there are any mistakes! Have a great day!
Hello! Thanks for holding interesting issue! I'm fully circled for this new work ! 🙆🏻♀️ 🙆🏻♀️
I also have opened my PR, please let me know if I missed something !
+ Q. which should I prioritize modern python docstring conventions or unity of that file (e.g. expression) ?
@a-r-r-o-w hi, I want to work on it
Hello colleagues,
I'm also interested into this issues. I made a first PR related to this issue.
Since there are lots of docstrings to update, I'm also interested into list up missing docstring files if I have time :)
Thank you in advance for your time and guidance! | 1,728,390,456,000 | null | Feature Request | [
"src/diffusers/training_utils.py:set_seed",
"src/diffusers/training_utils.py:cast_training_params",
"src/diffusers/training_utils.py:compute_density_for_timestep_sampling",
"src/diffusers/training_utils.py:compute_loss_weighting_for_sd3",
"src/diffusers/training_utils.py:free_memory",
"src/diffusers/training_utils.py:EMAModel.to",
"src/diffusers/training_utils.py:EMAModel.store",
"src/diffusers/training_utils.py:EMAModel.restore",
"src/diffusers/training_utils.py:EMAModel.load_state_dict"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-9583 | 99f608218caa069a2f16dcf9efab46959b15aec0 | diff --git a/src/diffusers/utils/import_utils.py b/src/diffusers/utils/import_utils.py
index 34cc5fcc8605..daecec4aa258 100644
--- a/src/diffusers/utils/import_utils.py
+++ b/src/diffusers/utils/import_utils.py
@@ -668,8 +668,9 @@ def __getattr__(cls, key):
# This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L319
def compare_versions(library_or_version: Union[str, Version], operation: str, requirement_version: str):
"""
- Args:
Compares a library version to some requirement using a given operation.
+
+ Args:
library_or_version (`str` or `packaging.version.Version`):
A library name or a version to check.
operation (`str`):
@@ -688,8 +689,9 @@ def compare_versions(library_or_version: Union[str, Version], operation: str, re
# This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L338
def is_torch_version(operation: str, version: str):
"""
- Args:
Compares the current PyTorch version to a given reference with an operation.
+
+ Args:
operation (`str`):
A string representation of an operator, such as `">"` or `"<="`
version (`str`):
@@ -700,8 +702,9 @@ def is_torch_version(operation: str, version: str):
def is_transformers_version(operation: str, version: str):
"""
- Args:
Compares the current Transformers version to a given reference with an operation.
+
+ Args:
operation (`str`):
A string representation of an operator, such as `">"` or `"<="`
version (`str`):
@@ -714,8 +717,9 @@ def is_transformers_version(operation: str, version: str):
def is_accelerate_version(operation: str, version: str):
"""
- Args:
Compares the current Accelerate version to a given reference with an operation.
+
+ Args:
operation (`str`):
A string representation of an operator, such as `">"` or `"<="`
version (`str`):
@@ -728,8 +732,9 @@ def is_accelerate_version(operation: str, version: str):
def is_peft_version(operation: str, version: str):
"""
- Args:
Compares the current PEFT version to a given reference with an operation.
+
+ Args:
operation (`str`):
A string representation of an operator, such as `">"` or `"<="`
version (`str`):
@@ -742,8 +747,9 @@ def is_peft_version(operation: str, version: str):
def is_k_diffusion_version(operation: str, version: str):
"""
- Args:
Compares the current k-diffusion version to a given reference with an operation.
+
+ Args:
operation (`str`):
A string representation of an operator, such as `">"` or `"<="`
version (`str`):
@@ -756,8 +762,9 @@ def is_k_diffusion_version(operation: str, version: str):
def get_objects_from_module(module):
"""
- Args:
Returns a dict of object names and values in a module, while skipping private/internal objects
+
+ Args:
module (ModuleType):
Module to extract the objects from.
@@ -775,7 +782,9 @@ def get_objects_from_module(module):
class OptionalDependencyNotAvailable(BaseException):
- """An error indicating that an optional dependency of Diffusers was not found in the environment."""
+ """
+ An error indicating that an optional dependency of Diffusers was not found in the environment.
+ """
class _LazyModule(ModuleType):
| [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:
r"""
Function that creates a simulation.
Args:
parameter_1 (`str` or `List[str]`):
Description of game level.
parameter_2 (`int`, *optional*):
Kardashev scale of civilization.
parameter_3 (`float`, defaults to `42.0`):
Difficulty scale.
Returns:
[`~simulations.objects.Civilization`]
A civilization simulation with provided initialization parameters.
"""
```
Some examples that don't follow the docstring convention are:
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations
- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after "Args", but should be before
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation
There are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!
Please limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews.
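For illustration only, a minimal before/after sketch of the "explanation after Args" and indentation patterns listed above; `scale_latents` and its numbers are made up, not code from the repository:
```python
# Before: summary placed after "Args:", descriptions unindented
def scale_latents(latents, factor=0.18215):
    """
    Args:
    Rescales latents by the VAE scaling factor.
    latents: the latents.
    """

# After: summary first, then typed, indented parameter descriptions
def scale_latents(latents: "torch.Tensor", factor: float = 0.18215) -> "torch.Tensor":
    r"""
    Rescales latents by the VAE scaling factor.

    Args:
        latents (`torch.Tensor`):
            The latents to rescale.
        factor (`float`, defaults to `0.18215`):
            The scaling factor applied to `latents`.

    Returns:
        `torch.Tensor`:
            The rescaled latents.
    """
    return latents * factor
```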
| Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue 🙂
No prerequisites that I can think of off the top of my head. Just that the PRs should be limited in scope, as mentioned. You can maybe look at the Diffusers contribution guide (and philosophy, if you're interested).
I'll take up some of these.
I’m also interested in this work. Could you let me know about the current progress? @SubhasmitaSw @charchit7
Hey @yijun-lee, I've been caught up with some work, unfortunately, but I’ll work on this over the weekend or later today. If you want to get started on any of these tasks, feel free to go ahead and let us know, so we can pick up whatever is left.
Oh, actually, I think I’ll be starting this weekend as well. If we proceed separately, it would be good to inform each other through comments or other means. Have a good day :) @charchit7
Sure, @yijun-lee, that works!
you too :)
Hello there guys, I'd also like to contribute to this issue. I'm sorry I didn't drop a message here earlier, but I hope this PR helps push things forward! A g'day to all.
Feel free to take up as many files as you want (one file per PR however)! The ones mentioned in the issue description are just a few examples, but there are probably hundreds of files that could use improvements. Please keep them coming, thanks | 1,728,062,817,000 | null | Feature Request | [
"src/diffusers/utils/import_utils.py:compare_versions",
"src/diffusers/utils/import_utils.py:is_torch_version",
"src/diffusers/utils/import_utils.py:is_transformers_version",
"src/diffusers/utils/import_utils.py:is_accelerate_version",
"src/diffusers/utils/import_utils.py:is_peft_version",
"src/diffusers/utils/import_utils.py:is_k_diffusion_version",
"src/diffusers/utils/import_utils.py:get_objects_from_module",
"src/diffusers/utils/import_utils.py:OptionalDependencyNotAvailable"
] | [] |
|
huggingface/diffusers | huggingface__diffusers-9579 | 0763a7edf4e9f2992f5ec8fb0c9dca8ab3e29f07 | diff --git a/src/diffusers/models/embeddings.py b/src/diffusers/models/embeddings.py
index 80775d477c0d..91451fa9aac2 100644
--- a/src/diffusers/models/embeddings.py
+++ b/src/diffusers/models/embeddings.py
@@ -86,12 +86,25 @@ def get_3d_sincos_pos_embed(
temporal_interpolation_scale: float = 1.0,
) -> np.ndarray:
r"""
+ Creates 3D sinusoidal positional embeddings.
+
Args:
embed_dim (`int`):
+ The embedding dimension of inputs. It must be divisible by 16.
spatial_size (`int` or `Tuple[int, int]`):
+ The spatial dimension of positional embeddings. If an integer is provided, the same size is applied to both
+ spatial dimensions (height and width).
temporal_size (`int`):
+ The temporal dimension of postional embeddings (number of frames).
spatial_interpolation_scale (`float`, defaults to 1.0):
+ Scale factor for spatial grid interpolation.
temporal_interpolation_scale (`float`, defaults to 1.0):
+ Scale factor for temporal grid interpolation.
+
+ Returns:
+ `np.ndarray`:
+ The 3D sinusoidal positional embeddings of shape `[temporal_size, spatial_size[0] * spatial_size[1],
+ embed_dim]`.
"""
if embed_dim % 4 != 0:
raise ValueError("`embed_dim` must be divisible by 4")
@@ -129,8 +142,24 @@ def get_2d_sincos_pos_embed(
embed_dim, grid_size, cls_token=False, extra_tokens=0, interpolation_scale=1.0, base_size=16
):
"""
- grid_size: int of the grid height and width return: pos_embed: [grid_size*grid_size, embed_dim] or
- [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
+ Creates 2D sinusoidal positional embeddings.
+
+ Args:
+ embed_dim (`int`):
+ The embedding dimension.
+ grid_size (`int`):
+ The size of the grid height and width.
+ cls_token (`bool`, defaults to `False`):
+ Whether or not to add a classification token.
+ extra_tokens (`int`, defaults to `0`):
+ The number of extra tokens to add.
+ interpolation_scale (`float`, defaults to `1.0`):
+ The scale of the interpolation.
+
+ Returns:
+ pos_embed (`np.ndarray`):
+ Shape is either `[grid_size * grid_size, embed_dim]` if not using cls_token, or `[1 + grid_size*grid_size,
+ embed_dim]` if using cls_token
"""
if isinstance(grid_size, int):
grid_size = (grid_size, grid_size)
@@ -148,6 +177,16 @@ def get_2d_sincos_pos_embed(
def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
+ r"""
+ This function generates 2D sinusoidal positional embeddings from a grid.
+
+ Args:
+ embed_dim (`int`): The embedding dimension.
+ grid (`np.ndarray`): Grid of positions with shape `(H * W,)`.
+
+ Returns:
+ `np.ndarray`: The 2D sinusoidal positional embeddings with shape `(H * W, embed_dim)`
+ """
if embed_dim % 2 != 0:
raise ValueError("embed_dim must be divisible by 2")
@@ -161,7 +200,14 @@ def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
- embed_dim: output dimension for each position pos: a list of positions to be encoded: size (M,) out: (M, D)
+ This function generates 1D positional embeddings from a grid.
+
+ Args:
+ embed_dim (`int`): The embedding dimension `D`
+ pos (`numpy.ndarray`): 1D tensor of positions with shape `(M,)`
+
+ Returns:
+ `numpy.ndarray`: Sinusoidal positional embeddings of shape `(M, D)`.
"""
if embed_dim % 2 != 0:
raise ValueError("embed_dim must be divisible by 2")
@@ -181,7 +227,22 @@ def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
class PatchEmbed(nn.Module):
- """2D Image to Patch Embedding with support for SD3 cropping."""
+ """
+ 2D Image to Patch Embedding with support for SD3 cropping.
+
+ Args:
+ height (`int`, defaults to `224`): The height of the image.
+ width (`int`, defaults to `224`): The width of the image.
+ patch_size (`int`, defaults to `16`): The size of the patches.
+ in_channels (`int`, defaults to `3`): The number of input channels.
+ embed_dim (`int`, defaults to `768`): The output dimension of the embedding.
+ layer_norm (`bool`, defaults to `False`): Whether or not to use layer normalization.
+ flatten (`bool`, defaults to `True`): Whether or not to flatten the output.
+ bias (`bool`, defaults to `True`): Whether or not to use bias.
+ interpolation_scale (`float`, defaults to `1`): The scale of the interpolation.
+ pos_embed_type (`str`, defaults to `"sincos"`): The type of positional embedding.
+ pos_embed_max_size (`int`, defaults to `None`): The maximum size of the positional embedding.
+ """
def __init__(
self,
@@ -289,7 +350,15 @@ def forward(self, latent):
class LuminaPatchEmbed(nn.Module):
- """2D Image to Patch Embedding with support for Lumina-T2X"""
+ """
+ 2D Image to Patch Embedding with support for Lumina-T2X
+
+ Args:
+ patch_size (`int`, defaults to `2`): The size of the patches.
+ in_channels (`int`, defaults to `4`): The number of input channels.
+ embed_dim (`int`, defaults to `768`): The output dimension of the embedding.
+ bias (`bool`, defaults to `True`): Whether or not to use bias.
+ """
def __init__(self, patch_size=2, in_channels=4, embed_dim=768, bias=True):
super().__init__()
@@ -675,6 +744,20 @@ def get_2d_rotary_pos_embed(embed_dim, crops_coords, grid_size, use_real=True):
def get_2d_rotary_pos_embed_from_grid(embed_dim, grid, use_real=False):
+ """
+ Get 2D RoPE from grid.
+
+ Args:
+ embed_dim: (`int`):
+ The embedding dimension size, corresponding to hidden_size_head.
+ grid (`np.ndarray`):
+ The grid of the positional embedding.
+ use_real (`bool`):
+ If True, return real part and imaginary part separately. Otherwise, return complex numbers.
+
+ Returns:
+ `torch.Tensor`: positional embedding with shape `( grid_size * grid_size, embed_dim/2)`.
+ """
assert embed_dim % 4 == 0
# use half of dimensions to encode grid_h
@@ -695,6 +778,23 @@ def get_2d_rotary_pos_embed_from_grid(embed_dim, grid, use_real=False):
def get_2d_rotary_pos_embed_lumina(embed_dim, len_h, len_w, linear_factor=1.0, ntk_factor=1.0):
+ """
+ Get 2D RoPE from grid.
+
+ Args:
+ embed_dim: (`int`):
+ The embedding dimension size, corresponding to hidden_size_head.
+ grid (`np.ndarray`):
+ The grid of the positional embedding.
+ linear_factor (`float`):
+ The linear factor of the positional embedding, which is used to scale the positional embedding in the linear
+ layer.
+ ntk_factor (`float`):
+ The ntk factor of the positional embedding, which is used to scale the positional embedding in the ntk layer.
+
+ Returns:
+ `torch.Tensor`: positional embedding with shape `( grid_size * grid_size, embed_dim/2)`.
+ """
assert embed_dim % 4 == 0
emb_h = get_1d_rotary_pos_embed(
| [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:
r"""
Function that creates a simulation.
Args:
parameter_1 (`str` or `List[str]`):
Description of game level.
parameter_2 (`int`, *optional*):
Kardashev scale of civilization.
parameter_3 (`float`, defaults to `42.0`):
Difficulty scale.
Returns:
[`~simulations.objects.Civilization`]
A civilization simulation with provided initialization parameters.
"""
```
Some examples that don't follow the docstring convention are:
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations
- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after "Args", but should be before
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above
- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation
There are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!
Please limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews.
| Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue 🙂
No prerequisites that I can think of off the top of my head. Just that the PRs should be limited in scope, as mentioned. You can maybe look at the Diffusers contribution guide (and philosophy, if you're interested).
I'll take up some of these.
I’m also interested in this work. Could you let me know about the current progress? @SubhasmitaSw @charchit7
Hey @yijun-lee, I've been caught up with some work, unfortunately, but I’ll work on this over the weekend or later today. If you want to get started on any of these tasks, feel free to go ahead and let us know, so we can pick up whatever is left.
Oh, actually, I think I’ll be starting this weekend as well. If we proceed separately, it would be good to inform each other through comments or other means. Have a good day :) @charchit7
Sure, @yijun-lee, that works!
you too :) | 1,728,042,903,000 | null | Feature Request | [
"src/diffusers/models/embeddings.py:get_3d_sincos_pos_embed",
"src/diffusers/models/embeddings.py:get_2d_sincos_pos_embed",
"src/diffusers/models/embeddings.py:get_2d_sincos_pos_embed_from_grid",
"src/diffusers/models/embeddings.py:get_1d_sincos_pos_embed_from_grid",
"src/diffusers/models/embeddings.py:PatchEmbed",
"src/diffusers/models/embeddings.py:LuminaPatchEmbed",
"src/diffusers/models/embeddings.py:get_2d_rotary_pos_embed_from_grid",
"src/diffusers/models/embeddings.py:get_2d_rotary_pos_embed_lumina"
] | [
"src/diffusers/models/embeddings.py:PatchEmbed",
"src/diffusers/models/embeddings.py:LuminaPatchEmbed"
] |
|
sktime/sktime | sktime__sktime-7221 | 0f75b7ad0dce8b722c81fe49bb9624de20cc4923 | diff --git a/sktime/datatypes/_adapter/polars.py b/sktime/datatypes/_adapter/polars.py
index e1fdd5f3ab7..e8138e4faa9 100644
--- a/sktime/datatypes/_adapter/polars.py
+++ b/sktime/datatypes/_adapter/polars.py
@@ -226,22 +226,31 @@ def check_polars_frame(
# columns in polars are unique, no check required
+ if lazy:
+ width = obj.collect_schema().len()
+ columns = obj.collect_schema().names()
+ dtypes = obj.collect_schema().dtypes()
+ else:
+ width = obj.width
+ columns = obj.columns
+ dtypes = obj.dtypes
+
if _req("is_empty", return_metadata):
- metadata["is_empty"] = obj.width < 1
+ metadata["is_empty"] = width < 1
if _req("is_univariate", return_metadata):
- metadata["is_univariate"] = obj.width - len(index_cols) == 1
+ metadata["is_univariate"] = width - len(index_cols) == 1
if _req("n_features", return_metadata):
- metadata["n_features"] = obj.width - len(index_cols)
+ metadata["n_features"] = width - len(index_cols)
if _req("feature_names", return_metadata):
- feature_columns = [x for x in obj.columns if x not in index_cols]
+ feature_columns = [x for x in columns if x not in index_cols]
metadata["feature_names"] = feature_columns
if _req("dtypekind_dfip", return_metadata):
index_cols_count = len(index_cols)
- dtype_list = obj.dtypes[index_cols_count:]
+ dtype_list = dtypes[index_cols_count:]
metadata["dtypekind_dfip"] = _polars_dtype_to_kind(dtype_list)
if _req("feature_kind", return_metadata):
index_cols_count = len(index_cols)
- dtype_list = obj.dtypes[index_cols_count:]
+ dtype_list = dtypes[index_cols_count:]
dtype_kind = _polars_dtype_to_kind(dtype_list)
metadata["feature_kind"] = _get_feature_kind(dtype_kind)
| [ENH] `polars` schema checks - address performance warnings
The current schema checks for lazy `polars`-based data types raise performance warnings, e.g.,
```
sktime/datatypes/tests/test_check.py::test_check_metadata_inference[Table-polars_lazy_table-fixture:1]
/home/runner/work/sktime/sktime/sktime/datatypes/_adapter/polars.py:234: PerformanceWarning: Determining the width of a LazyFrame requires resolving its schema, which is a potentially expensive operation. Use `LazyFrame.collect_schema().len()` to get the width without this warning.
metadata["n_features"] = obj.width - len(index_cols)
```
These should be addressed.
The tests to execute to check whether these warnings persist are those in the `datatypes` module - these are automatically executed for a change in the impacted file, on remote CI.
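For reference, a minimal sketch of the pattern the warning suggests (an illustrative helper, not the actual `check_polars_frame` implementation): resolve the schema once for lazy frames instead of touching `.width`, `.columns`, or `.dtypes` directly.
```python
# Illustrative only: resolve a LazyFrame's schema once and reuse it,
# instead of accessing obj.width / obj.columns (which triggers the warning).
import polars as pl


def frame_shape_info(obj):
    if isinstance(obj, pl.LazyFrame):
        schema = obj.collect_schema()
        return schema.len(), schema.names(), schema.dtypes()
    # eager pl.DataFrame
    return obj.width, obj.columns, obj.dtypes
```
Resolving the schema a single time keeps the check cheap and avoids the `PerformanceWarning` for lazy frames.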
| 1,727,991,899,000 | null | Performance Issue | [
"sktime/datatypes/_adapter/polars.py:check_polars_frame"
] | [] |