repo_name | issue_id | text
---|---|---|
jasonish/evebox | 54208499 | Title: colorization of JSON output
Question:
username_0: Like Kibana3 can do. It has 'raw' and 'json', where the former has some nice colors for easier reading.
Answers:
username_1: I had played with this before so had some code sitting around. I've not added a raw view (yet), but the colourized code cuts and pastes out just fine, so I'm not sure if it's really necessary.
The colours on the other hand may not be the best choice...
Status: Issue closed
username_0: I agree that the raw view may not be needed then. Is the color stuff in master? I think I'm running that but not seeing colors :)
username_1: Yes, it's commit <PASSWORD>.
Should look something like:

username_0: Ah, didn't realize you had just pushed it :)
Works nicely, thanks! |
plotly/plotly.js | 1005390441 | Title: The same marker color is displayed differently, depending on marker size
Question:
username_0: The first plot has all markers of constant size, size=10, and `marker_color='rgb(255,200, 0)'`.

Updating the size to an array, the color looks different (it has lower saturation):

Answers:
username_1: Yes, when you scale markers by size, the engine automatically lowers the opacity for some historical reason. You can force it back up to 1 to avoid this.
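For illustration, a minimal Python sketch of that workaround; the figure data and marker sizes here are made up:
```python
import plotly.graph_objects as go

sizes = [6, 10, 14, 18]  # array-valued sizes trigger the automatic opacity drop

fig = go.Figure(
    go.Scatter(
        x=[1, 2, 3, 4],
        y=[1, 2, 3, 4],
        mode="markers",
        # forcing marker.opacity back to 1 restores the original saturation
        marker=dict(size=sizes, color="rgb(255,200,0)", opacity=1),
    )
)
fig.show()
```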
username_0: Historical reason?!!! :)
username_1: My way of saying "I don't really love this design decision but we're stuck with it now for backwards-compatibility" :)
Status: Issue closed
|
thelarchmage/Tea_Cozy | 428331926 | Title: Summary
Question:
username_0: Hey Riley, again really great work on your project Teacozy, it's clear you have an excellent understanding of HTML and CSS basics. Spend a little more time reviewing and practicing the feedback until it clicks, and I'm confident you'll do fine.
Also keep some HTML/CSS best practices in mind when coding
https://css-tricks.com/snippets/css/a-guide-to-flexbox/
and https://hackernoon.com/flexbox-s-best-kept-secret-bd3d892826b6
When in doubt Google it (or MDN it) 🎉
Answers:
username_0: **Next steps:** you can update your project to include any of the suggestions I addressed today. Also if you choose to you can simply upload a new github repo, updated to reflect any feedback 👍
Ok Riley, keep up the good work and happy coding! |
dart-lang/sdk | 523954182 | Title: Dart Analyzer Error
Question:
username_0: Analyzer Feedback from IntelliJ
## Version information
- `IDEA AI-191.8026.42.35.5977832`
- `2.7.0-edge.d45c3d15cb3cea0104a87697c085259666eec528`
- `AI-191.8026.42.35.5977832, JRE 1.8.0_202-release-1483-b03x64 JetBrains s.r.o, OS Windows 10(amd64) v10.0 , screens 1920x1080`
## Exception
```
Dart analysis server, SDK version 2.7.0-edge.d45c3d15cb3cea0104a87697c085259666eec528, server version 1.27.4, FATAL error: Failed to handle request: {id: 4, method: analysis.setAnalysisRoots, params: {included: [C:\kharidbazaar], excluded: [C:\kharidbazaar\.idea, C:\kharidbazaar\.dart_tool, C:\kharidbazaar\.idea, C:\kharidbazaar\.pub, C:\kharidbazaar\build]}, clientRequestTime: 1573977471125}
FileSystemException: Directory watcher closed unexpectedly, path = 'C:\kharidbazaar'
#0 _FileSystemWatcher._listenOnSocket.<anonymous closure> (dart:io-patch/file_patch.dart:309:11)
#1 _ExpandStream._handleData (dart:async/stream_pipe.dart:248:23)
#2 _ForwardingStreamSubscription._handleData (dart:async/stream_pipe.dart:164:13)
#3 _rootRunUnary (dart:async/zone.dart:1136:13)
#4 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
#5 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
#6 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:336:11)
#7 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:263:7)
#8 _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:764:19)
#9 _StreamController._add (dart:async/stream_controller.dart:640:7)
#10 _StreamController.add (dart:async/stream_controller.dart:586:5)
#11 new _RawSocket.<anonymous closure> (dart:io-patch/socket_patch.dart:1384:35)
#12 _NativeSocket.issueReadEvent.issue (dart:io-patch/socket_patch.dart:890:18)
#13 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#14 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#15 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:116:13)
#16 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:173:5)
```
For additional log information, please append the contents of
file://C:\Users\parth\AppData\Local\Temp\report1.txt.
Answers:
username_1: Thanks for reporting this issue! This was previously reported at https://github.com/dart-lang/sdk/issues/38853; please follow along there.
Status: Issue closed
username_1: Duplicate of #38853 |
jupyterlab/jupyterlab | 276933394 | Title: Colors in markdown
Question:
username_0: The Jupyter notebook allows adding color with HTML, e.g. `<font color='color'> text </font>`. This, however, does not work in JupyterLab.
Answers:
username_1: Hi @username_0, this is related to https://github.com/jupyterlab/jupyterlab/issues/1812. There isn't a currently maintained CSS sanitizer that we are aware of. Closing in favor of that issue. Cheers!
Status: Issue closed
|
cloud-hypervisor/vhost-user-backend | 735434047 | Title: BSD-3-Clause license missing
Question:
username_0: I've just noticed this crate is licensed only as Apache-2.0, while Cloud Hypervisor has a hybrid Apache-2.0+BSD-3-Clause licensing model.
Was this done on purpose or just something coming from the crate template?
Answers:
username_0: @username_1 @sameo ping?
username_1: This code was newly written so wouldn't be covered by the BSD-3-Clause code coming (originally) from crosvm
username_0: Hmm.. I'm fine with having only the Apache-2.0 license, but given that the commits come from CH, which is covered by both licenses, I understand it should be covered by both too.
username_1: Each file in CH has an individual license. These files are Apache 2.0.
username_0: Oh, yes, you're right. Thanks for the clarification!
Status: Issue closed
|
z-edit/zedit | 766965903 | Title: "Maximum plugin count of 254 reached"
Question:
username_0: When running a patcher, I get this error message (paths shortened):
```
[ERROR] Error: Failed to create new element at: 0, "ENBLight Patch.esp"
Maximum plugin count of 254 reached.
at helpers.Fail (\ZEdit\resources\app.asar\node_modules\xelib\lib\helpers.js:63:15)
at \ZEdit\resources\app.asar\node_modules\xelib\lib\elements.js:37:21
at helpers.GetHandle (\ZEdit\resources\app.asar\node_modules\xelib\lib\helpers.js:80:9)
at Object.AddElement (\ZEdit\resources\app.asar\node_modules\xelib\lib\elements.js:35:20)
at preparePatchFile (eval at value (file:////ZEdit/resources/app.asar/app/app.js:749:18), <anonymous>:672:35)
at Object.run (eval at value (file:////ZEdit/resources/app.asar/app/app.js:749:18), <anonymous>:690:25)
at build (eval at value (file:////ZEdit/resources/app.asar/app/app.js:749:18), <anonymous>:220:52)
at Array.forEach (<anonymous>)
at eval (eval at value (file:////ZEdit/resources/app.asar/app/app.js:749:18), <anonymous>:288:36)
at Object.try (file:////ZEdit/resources/app.asar/app/app.js:10489:13)
```
I __do__ have more than 255 plugins - however, only if counting ESL-flagged ones. It looks like zEdit counts them toward the plugin limit...
Answers:
username_0: Looking at the log message, I suppose the issue is within [xelib](https://github.com/z-edit/xelib).
username_1: zEdit in its current state forces all plugins, including ESL and ESP-FE, to count towards the plugin limit, due to it not being updated to support the new formats. Hopefully this gets fixed in the future, but I've just simply moved on after coming across the same issue 6 months ago. |
amnag94/Accident-Prediction | 598598976 | Title: Extract features from an image using VGG-16
Question:
username_0: Any optimization done for training or testing the model should not affect the evaluation function.
The evaluation function should take a video sequence, produce probabilities for each frame, and make the final prediction if the probability is above a certain threshold (decided beforehand).
Answers:
username_0:
```python
import torch
import torch.nn as nn
from torchvision import models

# vgg16 = models.vgg16(pretrained=True, progress=True)
# this will download the entire model of 536 MB, as it has 138 million learnable params
class VGG_FEATURES(nn.Module):
    def __init__(self, original_model):
        super(VGG_FEATURES, self).__init__()
        self.features = original_model.features
        self.avgpool = original_model.avgpool
        # remove the last 6 layers of the classifier
        self.classifier = nn.Sequential(*list(original_model.classifier.children())[:-6])

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
```
username_0: # transformation
```python
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img_path = 'dataset/train/videoclips/clip_1/000017.jpg'
input_image = Image.open(img_path)
input_tensor = preprocess(input_image)
input_tensor.shape
input_batch = input_tensor.unsqueeze(0)
op = vgg16(input_batch)
```
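A minimal sketch tying the two snippets above together; it assumes the `VGG_FEATURES` class and the `input_batch` defined earlier:
```python
import torch
from torchvision import models

vgg16 = models.vgg16(pretrained=True, progress=True)
feature_extractor = VGG_FEATURES(vgg16)
feature_extractor.eval()

# keeping only the first classifier layer leaves a 4096-dim feature vector
with torch.no_grad():
    features = feature_extractor(input_batch)

print(features.shape)  # torch.Size([1, 4096])
```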
Status: Issue closed
|
Dev-Eritas/Dev-Eritas | 531110352 | Title: Create the header and footer for the page
Question:
username_0: ## User Story
**AS** DevEritas
**I WANT** to create HTML and CSS files for the header and footer
**SO THAT** they can be implemented on all pages
## Acceptance Criteria
- [ ] HTML and CSS code in some branch.
- [ ] Code merged into master.
Answers:
username_1: I created a branch called index so that the base of the header and footer lives there; I haven't merged yet, only opened the pull request
username_1: In the footer I added 3 icons (Instagram, Facebook, Twitter) so that they redirect to those pages
Status: Issue closed
|
InZidiuZ/op-framework-issue-tracker | 798953560 | Title: Bullets Penetrate Vehicles
Question:
username_0: **Assurance:**
Have you read through the rules from the `README.md` file in the root folder of this GitHub repository?
Yes I Have.
**Summary:**
Getting bullets to penetrate vehicles just enough to be able to hit the passengers but not behind the car itself
**Reason:**
So far, most of the shootouts are in one of two places, Rooftops and Cars
rooftops don't seem to be an issue at all since that can be resolved with actual skill, but car shootouts are a different story since PD doesn't have automatic weapons that can be shot from cars,
meaning that an officer on the ground with an SMG will always lose to a suspect in an EVO with a full-auto skorpion because of:
-Better firepower
-Moving cover
-Faster Speed
and even in vehicle combat it's mostly a full auto gun vs a combat pistol
I've been on plenty of shootouts where they have a car with full auto weapons going around people in rooftops and being able to shoot down officers that are on foot because of the latter.
Summing up, vehicle combat is broken for PD, 4 officers can't do anything against one guy with an uzi if we have no cover.
and realistically, cars just don't even stop bullets.
In my opinion: removing the ability to take cover inside of a car would drastically shift the vehicle combat meta in a way that would force people to think about better ways to engage in gunplay, therefore balancing the Police vs Criminal combat, this is a healthy way to fix the vehicle combat without fully removing it.
LINK FOR REFERENCE:
https://www.gta5-mods.com/misc/bullets-penetrate-cars |
the-difference-engine/ymim | 445214272 | Title: Events Page - Design
Question:
username_0: Utilize the master style guide in the Website Style Epic to design the Events page. Use the image below for reference. Note that social media icons will match the color, design, and scheme as on Landing Page.
Answers:
username_1: let's hold off on this until after Proof of Concept demo with the client (set for this week).
username_2: Change design of Event Page cards to mimic the events cards on EventBrite. Image on top, next the title, then date and time. Remove description.

Change the default image from the logo.png to ymim1.png or other rectangular image.
We should also have 3 cards across be the default and be centered on the page. As the browser size changes the number of cards will adapt to that.
Have a past events section on the bottom that will put the events before the current date sorted from most recent to oldest. Have the past events limited to previous year.
Status: Issue closed
|
AFLplusplus/AFLplusplus | 771115921 | Title: Add manual mechanism to motivate AFL to pursue a particular input
Question:
username_0: If I know that a particular region of code is leading to something potentially problematic, it seems like it would be good to add a mechanism to manually indicate that to AFL? I imagine the API would be something like adding `__afl_this_is_interesting()` to the code -- or maybe `__afl_this_is_interesting(index)`?
Not sure how involved this might be.
Answers:
username_1: IMHO you would just run another afl-fuzz instance (e.g. with -S temp -o same_o_parameter -i input_dir) and in that input directory you place the file that is interesting. as it will be only one file it will be guaranteed to be fuzzed first.
you could also just instrument that parts in the code that you are interested in.
username_0: The use case I'm thinking of is if I haven't yet discovered an input that has led to this code path, but prior static analysis indicates that if it ever got there then it would be *really* good to go down that path. Sure, I could figure out a way to create the input, but it seems like if I could motivate AFL to go down that path it would save some time.
username_0: Another workaround would be to just add an abort() once it gets to the code path, then do the input thing that you mention. But, then I have to watch AFL until it finds the interesting input.
I might look into adding this functionality if I had some pointers for how one might add it. I've been meaning to jump into the internals but haven't had the time yet.
username_1: you could have one binary where you instrument just the functions that lead to where you want to go, then automatically it would direct the fuzzing to that goal. you can use the selected instrumentation feature for that.
username_0: Thinking about it a bit more, I guess a better rephrasing of what I'm looking for is a way to weight a specific location so that AFL will value it more than other paths through the executable? Disabling selected instrumentation probably doesn't help there -- after all it's possible those other locations could lead to interesting results too.
username_1: ah you mean you want a guided fuzzing feature.
I fear this is something out of scope for afl++, as this would be a huge change. take a look at https://github.com/aflgo/aflgo which does exactly that.
username_2: Or have a look at injon's waypoints:
https://github.com/RUB-SysSec/ijon
This can work as secondary node, to discover more paths, with afl++ as main.
username_0: Thanks for the ideas!
Status: Issue closed
username_0: @username_1 @username_2 is there any interest in receiving IJON's changes in AFL++? The custom mutator support is pretty useful, don't really want to backport it onto IJON. :)
I've already taken the pieces of IJON that don't relate to maximizing values and ported them in AFL++ in the LTO pass, it was fairly trivial. Seems to be making some progress.
Because it directly affects the instrumentation internals and adds an extra load + xor per instrumentation block, I imagine the best way to enable/disable in AFL++ would be either with a separate compiler executable name (afl-clang-lto-ijon ? ) or by defining a macro to enable it.
Let me know if there's interest. If you think it's out of scope for AFL++, that's fine too.
username_2: If I know that a particular region of code is leading to something potentially problematic, it seems like it would be good to add a mechanism to manually indicate that to AFL? I imagine the API would be something like adding `__afl_this_is_interesting()` to the code -- or maybe `__afl_this_is_interesting(index)`?
Not sure how involved this might be.
username_1: @username_0 can you please give context what you are trying to achieve and why this is important for you?
I looked everywhere and cannot find a single repository or code using ijon. The burden of adding this to a target is high, so my guess is this is the reason for it.
The results are impressive when done, however I question the usefulness. as a developer you will usually have example input that reaches your code that you want to test, otherwise you would never know if it actually works. a bughunter on the other hand would not invest the time for ijon. things like crc, hashes etc. should be removed for fuzzing anyway (as it is also documented everywhere).
I am not saying no - just saying I need to hear good arguments :)
maybe @eqv and @schumilo want to chime in.
username_0: As stated, one of the goals for their set of changes is for a tool to help expand your search once coverage has been exhausted, and that's my motivation as well.
I created a commit showing a diff between AFL 2.51b and their fork at https://github.com/username_0/ijon/commit/dbb80e4058dc196caf5f95ab1297400ccee31a02 -- ignoring their min/max stuff, the changes to the compiler are actually fairly small.
username_1: hmm it is actually quite a large change. the changes that you see in the afl-llvm-pass have to be done in LLVMInsTrim.so.cc,
SanitizerCoverageLTO.so.cc and SanitizerCoveragePCGUARD.so.cc as well.
plus the coverage on/off functionality has to be implemented differently as this way costs too much IMHO.
and all that for something that nobody seems to be using. so I am still not convinced ...
why not running 1-2 ijon tasks in the fuzzing campaign same as cmplog, laf, eclipser etc.?
e.g. fairfuzz is also a nice fuzzer but it does not fit into afl++ as it would result in too many changes. but it can be run in parallel to afl++ (or any other variant).
Status: Issue closed
username_0: Fair enough. |
postcss/postcss-mixins | 1015002424 | Title: Mixin error with expression in nth-child
Question:
username_0: I have some trouble passing $number to an nth-child expression in a mixin. For now it works fine if I delete the calculations inside nth-child and leave just `div:nth-child($number)`.
I would be thankful for help resolving this.
```scss
@define-mixin mixinName $contusername_1nerName, $number: 12 {
  :global(.desktop){
    & .$(contusername_1nerName) > div:nth-child(30n + 22 + #{$number}) {
      /* some code */
    }
  }
}

@mixin mixinName contusername_1ner1, 9;
```
Answers:
username_1: The correct syntax:
```scss
@define-mixin mixinName $contusername_1nerName, $number: 12 {
  :global(.desktop){
    & .$(contusername_1nerName) > div:nth-child(30n + 22 + $number) {
      /* some code */
    }
  }
}

@mixin mixinName contusername_1ner1, 9;
```
But it will generate `:nth-child(30n + 22 + 9)`, which is not correct `nth-child` syntax.
Solutions:
1. Remove `+ 22` and use `31` in `@mixin mixinName contusername_1ner1, 31`.
2. Use JS mixins to `selector: & .${contusername_1nerName} > div:nth-child(30n + ${number + 22})` https://github.com/postcss/postcss-mixins#function-mixin
Status: Issue closed
username_0: @username_1 Many thanks, the second solution looks perfect for me, but unfortunately it doesn't work, maybe because of project dependency troubles. The first one works well |
dotkernel/frontend | 454741294 | Title: add error handler
Question:
username_0: Please add the error handler package as a default package,
to be sure all systems are ready to log the errors.
Answers:
username_1: DotKernel's own error handler library has been added to the Frontend application.
Errors will be logged to: {PROJECT_ROOT}/log/error-log-{CURRENT_DATE}.log
Status: Issue closed
|
expo/examples | 1152081093 | Title: running with-auth0 example always gives: result = null
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
Running the with-auth0 example doesn't retrieve the result.
UseEffect is called but result always stays null.
The url on return contains the token but that isn't picked up: "http://localhost:19006/#id_token=<KEY>"
**To Reproduce**
Steps to reproduce the behavior:
1. create new project with 'expo init authtest'
2. Install the dependencies with expo install, choose empty project
3. Copy the contents of the example's App.js into the one in your project
4. Fill in the client id and url, and fill in the callback url in Auth0
5. run 'npm run web'
6. Press the login button
7. Login with user in the Auth0 window
8. The browser returns to the http://localhost:19002 with the id_token but it isn't picked up.
**Expected behavior**
I expected to get an non-null result
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- Windows 11
- Firefox and Chrome
Answers:
username_1: @username_0 I found this issue helped me https://github.com/expo/examples/issues/289#issuecomment-927322312
username_0: Hi @username_1
Thanks for your reply and help.
I tried adding those two lines and it didn't work in my standard browser (Firefox).
As a double check I ran the example with Chrome and Edge. In those browsers I get a successful response with the token in it.
Do you maybe have a short git repo example that works for you that I could try?
Cheers
username_0: Found it... The Firefox setting 'Open link in new tab instead of a window' was the culprit.
Turning this off in the browser fixes it...
I'll close this one.
Status: Issue closed
|
npm/cli | 1040685239 | Title: [BUG] root package.json script not utilising workspace argument
Question:
username_0: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If I run `npm run build --workspace=project` the project workspace is built as expected.
If I however put a script in the root package.json: `"buildproject": "npm run build --workspace=project"` and then run `npm run buildproject` it fails (missing script: build).
### Expected Behavior
Running the script inside the package.json should do the same thing as running the script from the command line.
### Steps To Reproduce
1. In the workspace root, create a package.json script that uses the --workspace flag
2. Attempt to run that script
### Environment
- OS: Windows
- Node: 16.13.0
- npm: 8.1.0 |
kids-first/kf-portal-ui | 416820674 | Title: Query Summary: Study Card data hookup is wrong
Question:
username_0: There are some issues with the data hookup for the study card - it is both incorrect and not responding correctly to queries.
- [ ] incorrect data example

- [ ] incorrect response to querying example

Answers:
username_0: Tested - filtering on studies is now responding correctly. Seems the participant count for study `Disorders of Sex Development` is still wrong, but we can close this ticket and investigate separately with the index to see if something wrong is happening
Status: Issue closed
|
spring-projects/spring-boot | 315117433 | Title: Spring Boot 1.5.3.RELEASE actuator endpoint redirect to a login page when management.security.enabled is false
Question:
username_0: my project used Spring Boot 1.5.3.RELEASE and Spring Cloud Dalston.SR4
I have already set
```yaml
security:
basic:
enabled: false
management:
security:
enabled: false
```
in bootstrap.yml
But the actuator endpoints sometimes redirect to a login page.
- I find when endpoint status was 302, the bean of
`org.springframework.boot.actuate.autoconfigure.EndpointHandlerMapping`
is missing.
- In class
`
org.springframework.boot.actuate.autoconfigure.ManagementWebSecurityAutoConfiguration
`
can't get EndpointHandlerMapping through
`
endpointHandlerMapping = context.getBean(EndpointHandlerMapping.class);
`
- As a result, the variable delegate only has the two endpoints "/login" and "/error", so the actuator endpoints have been redirected.
It seems like `@ConditionalOnMissingBean` has some problems.
What should I do?
Thanks.
Answers:
username_1: There's nothing that really indicates a bug in Spring Boot and it's impossible to tell what is going on without a sample that we can run ourselves. Can you share one please?
username_2: If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
username_2: Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
Status: Issue closed
|
tpm2-software/tpm2-tss | 1088422981 | Title: FAPI crashes when creating key under the Endorsement Key.
Question:
username_0: I want to create an AIK under the EK. I tried using tss2_createkey and the FAPI function `Fapi_CreateKey`. Both work for creating a key under the SRK or in the Endorsement hierarchy, but when I supply the path to the EK I get a segmentation fault:
```bash
$ tss2_provision
$ tss2_createkey -p /SRK/AK -t sign -a ""
$ tss2_createkey -p /EK/AK -t sign -a ""
Segmentation fault (core dumped)
```
My C code using FAPI has the same results, therefore I'm posting this here and not in tools.
I am using a Docker image based on Ubuntu 18.04 which builds tss from source. It uses ibmswtpm version 1332, which does not come with an EK certificate, therefore I disabled the EK certificate check in the FAPI config. Is this a bug or am I missing something?
Answers:
username_1: @username_0 No you are not missing anything. I can confirm the bug and will create PR to fix it.
Status: Issue closed
|
trashgomi/mojique-translation | 562049113 | Title: Manaphylactic Priella
Question:
username_0: **Number of Lines**: 86
**Scene Description**:
Merchant dialogue. Fight with Priella. Scene afterwards, and chats.
#### Common
Lines 221-3, 681-3, 1124-6, 1510-2, 1806-7
#### Map227
Lines 6-7
#### Map407
All 30 lines
#### Map408
All 40 lines
Status: Issue closed |
RichyHBM/Monochromatic | 405653794 | Title: Per app setting
Question:
username_0: It would be awesome if Monochromatic could be automatically disabled for specific user-selectable apps while they're in the foreground, then re-enabled when that app is no longer in the foreground.
Example could be camera, gallery, stuff like that.
Answers:
username_1: I really like this idea, will investigate and hopefully add it for the next point feature
username_1: I seem to have this working in the whitelist branch. I'm going to try it out on my device for a few days to make sure there aren't any obvious bugs, and then update!
Status: Issue closed
username_1: Pushed the update!
username_0: Awesome, works great! |
xiaohu2015/SwinT_detectron2 | 1163061946 | Title: version of torch
Question:
username_0: Hi,
Thanks for sharing your code. I tried to run the convert_to_d2.py file to create a model file, but I got an error and I guess it's because of the torch version. Can you help me with the steps for running it, and please tell me which torch version I should use?
Thank you
Answers:
username_1: maybe you can show some error message. For torch, I recommend 1.10+
username_0: Thanks for your answer.
I want to use SwinT as a backbone in the fsdet project. It uses torch 1.6.0.
My error is:
data = pickle.load(f,encoding="latin1")
_pickle.UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified.
Thank you
username_1: it seems like an encoding issue; you can try utf-8 in your environment.
username_0: it's the encoding defined for detectron2; I tried it with encoding="bytes" too (with [this reference](https://docs.python.org/3/library/pickle.html#pickle.load))
Is your code compatible with torch version 1.6?
Can you share a direct download link for your .pkl file?
Thanks
username_1: @username_0 hi, you should save weights as a `pth` file rather than `pkl`, https://github.com/username_1/SwinT_detectron2/blob/main/convert_to_d2.py#L16
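A minimal sketch of that advice; the file name is made up. If I recall correctly, detectron2's checkpointer picks its loader from the extension, reading `.pkl` files with plain pickle and everything else with `torch.load`, so keeping the `.pth` extension avoids the unpickling error quoted above:
```python
import torch

# save the converted weights with torch.save under a .pth name ...
weights = {"model": {"backbone.w": torch.zeros(2, 2)}}
torch.save(weights, "swint_converted.pth")

# ... so they are read back with torch.load, not pickle.load
state = torch.load("swint_converted.pth", map_location="cpu")
print(state["model"].keys())
```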
username_0: Hi, yes I did the same thing. Thanks...
but I still have the encoding error.
username_1: @username_0 it is very strange...
username_0: Can you tell me please, which version of detectron2 did you use?
I checked it with detectron2 = 0.6
torch =1.10.0
and now I have this error:
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I guess I'm using a different version!
Thanks
username_1: yes, detectron2 = 0.6
torch =1.10.0
username_0: @username_1 Thank you so much.
Status: Issue closed
|
soulcutter/saxerator | 85490301 | Title: oga support
Question:
username_0: oga is yet another xml engine: https://github.com/YorickPeterse/oga
Answers:
username_1: Yeah, I recently came across oga - it would be great to include support for that library as well. Just need to create an adapter foundation and a bunch of these should fall into place pretty easily.
Status: Issue closed
|
pytorch/pytorch | 481746519 | Title: Migrate `baddbmm` and `baddbmm_` from the TH to Aten (CUDA)
Question:
username_0: Porting TH operators is essential for code simplicity and performance reasons.
Porting guides and Q&A are available in umbrella issue: #24507
Feel free to add @username_0 as a reviewer to get a prioritized review.
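For context, a quick illustrative check of the operator's semantics; the shapes here are arbitrary:
```python
import torch

# baddbmm computes: out = beta * input + alpha * (batch1 @ batch2), batched
batch1 = torch.randn(4, 3, 5)
batch2 = torch.randn(4, 5, 6)
inp = torch.randn(4, 3, 6)

out = torch.baddbmm(inp, batch1, batch2, beta=1.0, alpha=1.0)
assert torch.allclose(out, inp + torch.bmm(batch1, batch2), atol=1e-6)
```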
Answers:
username_1: Self assigning this for a fix of #29984
cc: @csarofeen @ptrblck for visibility
username_2: @username_1 are you still planning to work on this?
username_2: making it hi priority because it's important for complex tensors
username_1: @username_1 No, I am not working on this right now. |
emergenzeHack/covid19italia_segnalazioni | 602810200 | Title: Le realtà di Novate, CRP – Comunità di Relazioni Positive, ANFFAS ONLUS Bollate Novate, La Tenda ODV
Question:
username_0: <pre><yamldata>
Da_chi_offerta: ANFFAS Bollate Novate, CRP - Comunità di Relazioni Positive, La Tenda
ODV
Descrizione: "Le realtà di Novate, CRP – Comunità di Relazioni Positive, ANFFAS ONLUS\
\ Bollate Novate, La Tenda ODV, insieme ad un gruppo di volontari, hanno dato vita\
\ all'iniziativa “Gruppo di Prossimità” per sostenere le persone maggiormente in\
\ difficoltà a seguito delle restrizioni legate all’emergenza Coronavirus. \n\n\
Il Gruppo di Prossimità si rende disponibile, con il personale dipendente e volontario\
\ delle proprie realtà, a contattare telefonicamente le persone sole, le persone\
\ anziane, i ragazzi con una disabilità che non stanno frequentando i loro centri\
\ o qualsiasi altra persona ne avesse la necessità.\n\nCi proponiamo con questo\
\ di far sentire le persone meno sole e isolate, dare loro conforto, scambiare due\
\ parole, raccontare una storia o proporre una attività che possa essere fatta in\
\ casa, ascoltare le riflessioni e le preoccupazioni che possono insorgere in queste\
\ giornate di difficoltà e isolamento, raccogliere richieste di aiuto, indirizzare,\
\ qualora ce ne sia il bisogno, a servizi specifici e numeri utili attivi.\n\nMettiamo\
\ a disposizione un numero per essere contattati e per ricevere segnalazioni di\
\ persone che sono sole o che potrebbero beneficiare di questo servizio: ☎️ 370/3752763\
\ (numero dello Sportello di ANFFAS). Il numero è attivo tutti i giorni sabato e\
\ domenica inclusi, dalle 10.00 alle 21.00.\n\nInvitiamo chiunque a segnalarci situazioni\
\ di persone che passano tante ore da sole in casa o che sono particolarmente fragili\
\ in questo momento. E' un segno di attenzione verso l'altro e di interessamento.\n\
\nPossiamo essere contattati anche per ricevere manifestazioni di nuove disponibilità\
\ ad aiutare.\nRiteniamo che solo insieme, occupandoci gli uni degli altri, senza\
\ lasciare nessuno escluso, si possa superare positivamente questo difficile momento."
Destinatari: bambini, persone con disabilità, fruitori dei CSE, persone sole, anziani
Link: www.associazioneCRP.org
Natura: culturale-ricr
Posizione: 45.530493 9.130754 0 0
Titolo: Gruppo di Prossimità
</yamldata></pre> |
devinus/poison | 81245181 | Title: Erlang VM crashed when decoding invalid UTF-8 string
Question:
username_0: ```elixir
Poison.decode(<<34, 237, 160, 189, 237, 178, 142, 34>>)
```
The string in the example is CESU-8 encoded, which is not valid UTF-8, but the code crashes the Erlang VM.
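For reference, those bytes encode a UTF-16 surrogate pair in CESU-8 form, which any strict UTF-8 decoder must reject; a quick check in Python, purely illustrative:
```python
data = bytes([34, 237, 160, 189, 237, 178, 142, 34])

try:
    data.decode("utf-8")
except UnicodeDecodeError as e:
    # 0xED 0xA0 ... encodes a surrogate, which strict UTF-8 forbids
    print(e)
```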
Answers:
username_1: @username_0 Good find. It looks like I've got an infinite loop somewhere. I'll investigate.
Status: Issue closed
|
vimeo/psalm | 1091349926 | Title: Psalm stopped seeing errors
Question:
username_0:
```xml
<projectFiles>
    <directory name="src/" />
    <ignoreFiles>
        <directory name="vendor" />
    </ignoreFiles>
</projectFiles>
<issueHandlers>
    <MixedPropertyTypeCoercion>
        <errorLevel type="suppress">
            <file name="src/Transport/Message/Params/SendMessageParams.php"/>
        </errorLevel>
    </MixedPropertyTypeCoercion>
</issueHandlers>
<stubs><file name="./config/bootstrap.php" /></stubs>
</psalm>
```
Note: I removed about 20 entries from `issueHandlers` which would trigger errors (just to test if psalm is working).
**Command:**
`sudo -u wwwbm -E -H /usr/bin/php -d memory_limit=2048M vendor/bin/psalm`
**Output:**
```
Scanning files...
Analyzing files...
░
------------------------------
No errors found!
------------------------------
Checks took 2.14 seconds and used 217.018MB of memory
Psalm was unable to infer types in the codebase
```
I'm stuck. I'm not even sure at which moment it stopped showing errors; it happened in the middle of coding, actually.
I know the question is too broad now, so I'll try to make it a little bit more specific:
Is there any way to debug psalm? Any kind of verbose mode, or maybe some internal debugging system for the lib? I'm literally going crazy.
**Psalm version:** 3.18.2
Status: Issue closed
Answers:
username_0: Solved by updating to the latest psalm version without any changes to the code. Still don't know what that was.
username_1: :+1: |
apache/airflow | 755446266 | Title: KubernetesPodOperator can't mount secret as volume
Question:
username_0: **Apache Airflow version**: 1.10.12 and 2.0.0b3
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.19.2
**Environment**:
- **Cloud provider or hardware configuration**: minikube
- **OS** (e.g. from /etc/os-release): Ubuntu 20.04
- **Kernel** (e.g. `uname -a`): 5.4.0-53
**What happened**:
With the KubernetesPodOperator and KubernetesExecutor, when I try to mount a secret as a volume, the pod (the one in the KubernetesPodOperator) doesn't launch and the task returns as failed. The error occurs with either a V1Volume object or an airflow Secret object.
Persistent volume claims work perfectly.
**What you expected to happen**:
airflow launches a worker pod on kubernetes and the worker pod launches a pod on kubernetes
**How to reproduce it**:
```python
import airflow
from airflow import DAG
from kubernetes.client import models as k8s
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.kubernetes.secret import Secret
default_args = {
    'owner': 'debug',
    'depends_on_past': False,
    'start_date': airflow.utils.dates.days_ago(1),
}

pvc_volume = k8s.V1Volume(
    name='ml-data',
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='ml-data')
)

secret_volume = k8s.V1Volume(
    name='deploy-key',
    secret=k8s.V1SecretVolumeSource(default_mode=600, secret_name="dvc-deploy-key")
)

pvc_volume_mount = k8s.V1VolumeMount(
    name='ml-data', mount_path='/data/', sub_path=None, read_only=False
)

secret_volume_mount = k8s.V1VolumeMount(
    name='deploy-key', mount_path='/root/.ssh', sub_path=None, read_only=True
)

secret_file = Secret(deploy_type='volume',
                     deploy_target='/root/.ssh/',
                     secret='dvc-deploy-key')

dag = DAG(
    "testing",
[Truncated]
```
**Anything else we need to know**:
I'm not sure if it's an airflow issue or a kubernetes-client issue
How often does this problem occur? Once? Every time etc?
This problem occurs every time.
Any relevant logs to include? Put them here inside a details tag:
<details><summary>worker.log</summary> airflow@testingpulldata:/opt/airflow$ airflow tasks run testing pull_data "2020-12-02T15:12:29.757041+00:00" --local --pool default_pool --subdir /opt/airflow/dags/test.py
[2020-12-02 15:30:19,007] {dagbag.py:440} INFO - Filling up the DagBag from /opt/airflow/dags/test.py
/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:26 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1Volume`.
/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:27 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1VolumeMount`.
Running <TaskInstance: testing.pull_data 2020-12-02T15:12:29.757041+00:00 [success]> on host testingpulldata
</details>
Answers:
username_1: Hello @username_0 ,
I just wrote a KubernetesPodOperator guide here: https://github.com/username_1/airflow/blob/k8s-docs/docs/apache-airflow-providers-cncf-kubernetes/operators.rst#mounting-secrets-as-volume
The example DAG in the guide runs fine and mounts the Secret volume.
The given worker.log mentions only a DeprecationWarning, which does not stop the Pod from being launched, IMHO.
username_2: @username_1 I read your document and I was wondering how to map a volume. I want to do the equivalent of
docker run -v /host/directory:/container/directory using airflow and KubernetesPodOperator, so I can take an existing directory path available to airflow and map it to a directory path inside the container. I cannot find any examples of this.
username_1: There is an example of mounting a host volume to a pod:
Here is the link: https://github.com/username_1/airflow/blob/k8s-docs/docs/apache-airflow-providers-cncf-kubernetes/operators.rst#mounting-persistent--volume-to-pod
In this example the local machine's ``/tmp/myapp`` is mounted to the pod.
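A hypothetical sketch of what that boils down to, using the same kubernetes client objects as the DAG above; the volume name and paths are illustrative:
```python
from kubernetes.client import models as k8s

# docker run -v /host/directory:/container/directory, expressed as a hostPath volume
host_volume = k8s.V1Volume(
    name="host-data",
    host_path=k8s.V1HostPathVolumeSource(path="/host/directory"),
)
host_volume_mount = k8s.V1VolumeMount(
    name="host-data", mount_path="/container/directory", read_only=False
)
```
Both objects would then be passed to the operator's `volumes` and `volume_mounts` arguments, as in the DAG above.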
username_3: @username_2 are you still having issues mounting secrets or can this be closed?
username_2: This can be closed |
prettier/prettier-atom | 404153738 | Title: Installing “prettier-atom@0.56.2” failed
Question:
username_0: npm WARN deprecated sb-memoize@1.0.2: No longer maintained - Use lodash probably
npm WARN deprecated circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.
npm WARN deprecated browserslist@2.11.3: Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN enoent ENOENT: no such file or directory, open 'C:\Users\I328437\AppData\Local\Temp\apm-install-dir-119029-14664-1o68puz.vpda\package.json'
npm WARN apm-install-dir-119029-14664-1o68puz.vpda No description
npm WARN apm-install-dir-119029-14664-1o68puz.vpda No repository field.
npm WARN apm-install-dir-119029-14664-1o68puz.vpda No README data
npm WARN apm-install-dir-119029-14664-1o68puz.vpda No license field.
npm ERR! path C:\Users\I328437\AppData\Local\Temp\apm-install-dir-119029-14664-1o68puz.vpda\node_modules\prettier-atom\node_modules\prettier-stylelint\cli.js
npm ERR! code ENOENT
npm ERR! errno -4058
npm ERR! syscall chmod
npm ERR! enoent ENOENT: no such file or directory, chmod 'C:\Users\I328437\AppData\Local\Temp\apm-install-dir-119029-14664-1o68puz.vpda\node_modules\prettier-atom\node_modules\prettier-stylelint\cli.js'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\I328437\.atom\.apm\_logs\2019-01-29T07_15_50_147Z-debug.log
Status: Issue closed
Answers:
username_1: Closing as others aren't having this issue.
username_2: @username_1 I have the same, and https://github.com/AtomLinter/linter-eslint/issues/1275 too, any idea what's wrong?
username_1: @username_2 Are you sure it's 100% the same error with the same underlying library? I would try uninstalling and re-installing your prettier-atom package to make sure you have the latest associated packages. |
fatih/vim-go | 1142336561 | Title: Minor Improvement to :GoFmt Requested
Question:
username_0: <!--
Before filing an issue, please check if vim-go's help addresses your problem (see `:help go-troubleshooting`).
Consider executing `:GoReportGitHubIssue` to populate much of this information automatically.
-->
### What did you do? (required: The issue will be **closed** when not provided)
I set up vim 8.2.4386 on Fedora 35 Server x86_64 to run :GoFmt when I write out a Go file.
This works fine, except that, because I make a lot of mistakes, the :GoFmt output display often doesn't
start at error #1. This means I have to scroll the :GoFmt window up to the first error. I've attached a
screen dump to this report.
<!--
If possible, please provide clear steps for reproducing the problem.
-->
I only have to save a Go file.
### What did you expect to happen?
I'd like the error display to start with error #1
### What happened instead?
The error display starts with an error after #1.
### Configuration (**MUST** fill this out):
#### vim-go version:
I'm not sure how to see this. It's the version that I get today 2/17/2022.
#### `vimrc` you used to reproduce:
set number ff=unix nohlsearch noincsearch dir=/tmp/.vim autoindent mouse=a
set laststatus=2
set statusline=%f%m%r%h%w\ [%Y]\ %<\ %F%4v,%4l\ of\ %L\ lines
let g:loaded_matchparen=1
colorscheme industry
set gfn=GoMono:h14 autoread
augroup vimStartup
au!
" When editing a file, always jump to the last known cursor position.
" Don't do it when the position is invalid, when inside an event handler
" (happens when dropping a file on gvim) and for a commit message (it's
" likely a different one than last time).
autocmd BufReadPost *
\ if line("'\"") >= 1 && line("'\"") <= line("$") && &ft !~# 'commit'
\ | exe "normal! g`\""
\ | endif
autocmd! BufWritePre *.go GoFmt
augroup END
command NN set nonumber mouse=v
command NU set number mouse=a
if &term =~ '^xterm'
" Cursor in terminal:
" Link: https://vim.fandom.com/wiki/Configuring_the_cursor
" 0 -> blinking block not working in wsl
" 1 -> blinking block
" 2 -> solid block
" 3 -> blinking underscore
" 4 -> solid underscore
" Recent versions of xterm (282 or above) also support
" 5 -> blinking vertical bar
" 6 -> solid vertical bar
" normal mode
[Truncated]
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build198855228=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
#### gopls version
<details><summary><code>gopls version</code> Output:</summary><br><pre>
<!-- gopls version -->
golang.org/x/tools/gopls v0.7.5
golang.org/x/tools/gopls@v0.7.5 h1:8Az52YwcFXTWPvrRomns1C0N+zlgTyyPKWvRazO9GG8=
</pre></details>
<img width="754" alt="vim" src="https://user-images.githubusercontent.com/3916830/154618201-73759b47-1a65-4d02-9d1d-7ac54f128efe.PNG">
Answers:
username_1: What value are you using for `g:go_fmt_command`?
username_2: Open vim and run `:echo g:go_fmt_command`. That should show the current value.
username_0: This variable isn't defined. I've attached a screen dump of what I see right after trying to write out a Go file.
GoFmt is working fine, except for the irritation of not starting its window with the first error.
<img width="663" alt="fmt" src="https://user-images.githubusercontent.com/3916830/154748594-1691618c-a75f-4160-82d0-0bbff40a0377.PNG">
.
username_1: I'm having a hard time getting enough errors and of the right types to duplicate what you're seeing. Do you have a file I could use as a test case?
username_0: That file I'm showing in the screen dump is part of a much larger package. I'll try to create a minimal file that shows the problem.
Thanks for looking at this.
Jon
username_1: I've been able to duplicate this with `g:go_fmt_command` set to `gofmt`, but not with the default, `gopls` -- `gopls` seems to abbreviate the number of errors in the cases I've tested with. That's ok, though, the overall effect is the same.
I don't see a good way to handle this from within vim-go, though. In my testing, the current cursor location in the location list window seems to depend on where the cursor is in the code window; I've seen the location list window be as you want it to be, as it is in your examples, and somewhere in the middle.
When vim-go opens the location list window, it sets the height according to `g:go_list_height`, and that seems to play some role in the behavior you're seeing, though I don't see a way to tell vim whether to reduce from the bottom of the window or from the top 🤔 |
desmarais-patrick/notes | 435916507 | Title: Establish a list of tools for this project
Question:
username_0: Which tools should we use for the first phase of this project:
* Documentation
* Project management
* Design user experience
* Design user interface
* Front-end development
* Back-end development
* Code testing
* Continuous integration
* Deployment
* Performance testing
Since there are sub-projects, not all tools should be listed. The general approach to deciding which tools and frameworks to use should be emphasized.
Consider adding a section in the Wiki for this. Maybe to include in the solution section.
Status: Issue closed
Answers:
username_0: Tools have been added to the [Architecture](/username_0/wiki/Architecture) page. |
jaromir-sukuba/a-p-prog | 757297711 | Title: PIC12F1840: Wrong device ID: 0000, expected: 1b80
Question:
username_0: Hi.
I have been trying to use this application on Windows to program a PIC12F1840, however, the `pp3` reports the following problem:
```
C:\>pp3.exe -c COM7 -t 12f1840 blink.X.production.hex
PP programmer, version 0.99
Wrong device ID: 0000, expected: 1b80
Check for connection to target MCU, exiting now
```
Mine is an Arduino Uno connected as follows:
| Arduino pin | Target PIC pin |
|-|-|
| GND | GND |
| 5V | VDD |
| A3 | MCLR |
| A1 | ICSPDAT |
| A0 | ICSPCLK |
Note: I'm using series resistors (470 Ohms) because the target IC is on the board below, being programmed via ICSP:

NOTE: the board was programmed via PICKit3 before, so the problem is not related to the board.
HEX file for testing is attached. It is just a blink application (blinking the `RA5` pin) generated from MCC using the latest XC8 version.
[teste.X.production.hex.txt](https://github.com/username_3/a-p-prog/files/5644539/teste.X.production.hex.txt)
TIA for any help to solve it!
Answers:
username_1: It could be a PIC12LF1840; I bought 10 MCUs marked as PIC12F1840 on AliExpress and got PIC12LF1840 instead. I don't know if it is a fake remarked chip or a mistake by Microchip.

username_2: I have the same issue. What I have is just a PIC12F1840 soldered to an 8-pin breakout board, and connected to an Arduino Nano as username_0 described. At first I tried to use an Arduino Mega at 3.3V to program it, but had the same issue, so I thought using an Arduino Nano at 5V would help, but I still get device ID "0000". Any ideas what else to try?
username_2: According to the data sheet, Vpp on MCLR needs to be within 8-9V range. I found another project https://diyodemag.com/projects/arduino_pic_programmer they had to use charge pump to generate high voltage, since using 5V from Arduino is not sufficient, and adding pull-up to high voltage would kill the Arduino, so opto-isolator is necessary.
I'm not sure if reading the device ID is supposed to work without 9V Vpp? If reading the device ID happens after entering programming mode, then I guess it is impossible, because the 12F1840 will not enter programming mode without 8-9V Vpp according to the datasheet.
username_2: After reading the datasheet carefully, I see that PIC12F1840 supports Low-Voltage Programming so in theory it should work using VDD only, without high voltage. But unfortunately so far I cannot make it work. I do not have high-voltage programmer so I cannot try high-voltage programming.
username_3: I personally verified PIC12F1840 and it worked. Could you show me your hardware connection?
username_2: Sure, I would appreciate any ideas what could be wrong.
```
Arduino pin | PIC12F1840 pin
GND | GND
5V | VDD
A3 | MCLR
A1 | ICSPDAT
A0 | ICSPCLK
```



It may be hard to see the connections on the photos, especially the last one, but at the very least it shows 5V (the red wire) is connected to pin 1 (VDD) and the brown wire to pin 8 (GND). Other connections are as described above; I checked them multiple times.
I also connected an oscilloscope and captured the first sequence of signals after ICSPCLK was triggered for the first time. The yellow line is MCLR, and for the duration of the captured period it is about 0V; the green line is ICSPCLK and the red line is ICSPDAT.

The PIC12F1840 is brand new, it was always connected with the right polarity and wasn't overheated during soldering, and I checked that every wire conducts electricity as it should. What I see on the oscilloscope seems to confirm I have used the right Arduino pins. Unfortunately all I get is the error "Wrong device ID: 0000, expected: 1b80".
Am I doing something wrong, and if not, is it worth trying high-voltage programming (I have no problem adding it to the circuit, but I have not figured out yet how to change the firmware)?
username_3: Hello,
I just tried to take fresh arduino compatible board, downloaded sources from repository, flashed the arduino board, connected 12F1840 on a piece of breadboard and it just worked.


Are you sure the PIC is new? Does it come from reputable source?
username_2: I got it from an AliExpress seller, and they had a lot of good feedback, so this can be considered a reputable source. I double checked everything, and to my surprise I discovered that the ground on my soldering iron wasn't properly connected, and AC voltage was present on the soldering iron. Perhaps this is what killed it. I decided to buy one more 12F1840 locally because I couldn't wait 1-2 months again for a delivery from AliExpress; I really needed the 12F1840 for an old project for which I had the source code, and porting it to a modern microcontroller would take many days of work.
I of course fixed the ground connection on my ESD-safe soldering iron before soldering. After that, I soldered it and tried programming it, and it worked. I tested programming with 3.3V and 5V; in both cases programming finished successfully.
Perhaps my experience will be useful to somebody in the future (this thread comes up high in Google when searching for "Wrong device ID: 0000, expected 1b80"), but the point is, the programming firmware and software work fine; my issue was caused by a dead 12F1840. I was not able to realise this right away because I was new to this and did not have another PIC to test with at the time.
pastelsky/bundlephobia | 396116440 | Title: suggest Grafoo over Apollo and Relay on similar packages
Question:
username_0: ## Type
Feature Request
## Package name
Apollo
### Description
Grafoo should be listed as an alternative to Relay and Apollo.
Answers:
username_1: Thanks for the suggestion. However, it does not seem like `Grafoo` meets the following criteria –
`Package has at least 1000 weekly downloads on NPM or is relatively popular on GitHub.`
This does not mean `Grafoo` isn't a good library, but just that we plan to only offer well-known, maintained, and relevant libraries as suggestions in Bundlephobia.
Status: Issue closed
|
infection/infection | 278729021 | Title: Use existing coverage report
Question:
username_0: Unless I missed it, right now we are running the tests, which requires xdebug or phpdbg, to get the coverage data to then be able to do our own stuff.
I think a typical user would already have tests with coverage somehow, so IMO it would make much more sense to allow providing the coverage data files, which would avoid having to run the tests once.
Answers:
username_1: Do you suggest to have an option for providing code coverage data for Infection?
I'm not sure skipping the "Initial test suite run" step is a good idea.
1. We must run the whole test suite to make sure all tests are in a passing state. Without it, we can't guarantee the correctness of the MSI.
2. When the user changes the code and/or tests, we must re-generate code coverage.
I'm not sure I got you right though.
username_0: I think we're just describing different workflows.
Your workflow fits very well in the case where you run your tests without coverage for quick feedback and then run Infection to get a detailed report.
Another workflow would be to run the tests with coverage and then Infection for the report.
In the second case, running the tests again with coverage is a waste of time. And a use case for that second case is for example if you have to do a weird workaround for running the tests with coverage, if you want to display multiple coverage reports (XML + Text for example) or send the coverage report to another tool.
That said I don't think feature is a high priority one.
username_2: Currently, we're running our tests with coverage twice at CI, one with phpunit to get the reports, and another one with infection.
Each step takes a few minutes as my current project has a number of end-to-end (e.g. drilling through the whole Symfony stack) tests that are very slow when xdebug and coverage are enabled. Non-coverage phpunit runs finish within a second.
If infection could take a coverage report file from a previous phpunit run it would cut our build times down substantially. Infection's initial test run would still run, but without coverage.
Infection could then print a warning that using an external coverage file could give misleading results if stale. It would be the developer's responsibility to ensure that doesn't happen.
username_2: An alternative way that would help in my use case would be for infection to honor coverage report generation settings on phpunit.xml including location. This would perhaps be cleaner and safer.
username_3: This would cut down on my build times as well. I like the suggestion of using the `phpunit.xml` settings as it would allow Infection to be more easily integrated into a full CI setup without adding redundancy.
username_1: As for me, It would be confusing if I use e.g. `tmpDir=cache`, but coverage-xml is generated to another folder from `phpunit.xml`. But maybe I'm wrong.
Would option 1 work for you?
username_2: It wouldn't unfortunately as infection is very specific in requiring the reports it needs to function while stripping my own reports from phpunit.xml. Infection has its needs and other projects have others.
Infection also leaves behind a lot of temporary files that would clutter up my reports.
username_1: exactly.
So,
1. if you want infection to use existing reports and skip coverage during Initial Test Run - reports should be `junit` and `xml`.
2. if you want infection to generate coverage to the folder from `phpunit.xml` - infection will be able to generate only `junit` and `xml`, but not html and others
If none of these options suit your needs - it's not clear how to proceed.
username_2: Option 1 would totally work I reckon, if we can point infection to the location of them reports
username_1: Then let's do it for 0.8.0 :) I will pick it up.
username_2: 👍👍👍
username_4: Another side effect of this is that we don't **need** the user to have Xdebug enabled or run phpdbg, if they have a pre-existing coverage report
Status: Issue closed
username_1: @username_3 @username_2 implemented. Feel free to check it in master, not sure when it will be released though.
Example how to use it:
https://github.com/infection/infection/blob/1966cbb703cc21cc2d39d4bfc3d0231ae3712ca3/.travis.yml#L27-L28
Documentation: https://github.com/infection/site/blob/master/src/guide/command-line-options.md#--coverage
username_2: Understood, I'll give this a spin as soon as I get back to work in the
morning 👍--
Kind regards,
<NAME>on
username_3: This part works perfectly for me when I use phpdbg. I do run into the #182 issue when I don't run Infection through phpdbg, but it looks as though the fix for that has not been merged yet.
Thanks for all the work on this :+1: |
pingcap/community | 973206660 | Title: Tracking issue for reviewing community content
Question:
username_0: As time goes by, this repository generated lots of content, including:
* Event: Hacktoberfest, bug-hunting-programs, challenge-programs, etc.
* Governance: governance, special-interest-groups, toc, working-groups, architecture, etc.
* Incubator: graduation-proposal, rfc, etc.
* Contribution Guideline: contributors, learning-resources, become-a-committer.md, etc.
## Event
They are mostly outdated. We can archive such content as https://github.com/kubernetes/community/tree/master/archive does.
## Governance
As mentioned at #516, we are undergoing a governance reorganization. Legacy content such as working groups and architecture should be updated. The content itself, depending on whether it has valuable information, is deleted or archived as above.
## Incubator
Sorry, I'm still digging into how the incubator actually works.
## Contribution Guideline
We're writing [TiDB Dev Guide](https://github.com/pingcap/tidb-dev-guide) and will integrate content under "Contribute to TiDB" chapter.
Also, some outdated content should be deleted, such as that about the legacy ti-srebot logic.
mozilla/network-api | 217598267 | Title: Set up network-api templates -> mezzanine template deployments
Question:
username_0: This will rely on:
- [ ] converting pug to django templates, https://github.com/mozilla/network-api/issues/86
- [ ] making template conversion part of the network-api build
- [ ] sending the converted template(s) over to the mezzanine repository
- [ ] have the mezzanine repo automatically put them in the right place so that auto-deploy works
Status: Issue closed
Answers:
username_0: tracking on the `network` repo |
poymanov/tinysteps-app | 698281909 | Title: Teacher-Learning Goal - viewing goals by teacher
Question:
username_0: ## Description
A list of a teacher's goals can be retrieved.
## Process
A user with any permissions (authorized or guest) can get the list of goals assigned to a teacher.
GET request to `/teachers/goal/show/all/{teacher_id}`.
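A request sketch (the host and teacher ID are placeholders):
```sh
curl -X GET https://example.com/teachers/goal/show/all/42
```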
## Tests
- The teacher ID is given in an invalid format;
- The specified teacher does not exist;
- Attempting to view goals for a teacher in the archived state;
- Successfully retrieving the list of a teacher's learning goals;
- Retrieving an empty list of a teacher's learning goals.<issue_closed>
Status: Issue closed |
ISPP/pet-care | 153600528 | Title: Add Social Buttons to the footer of the Masterpage
Question:
username_0: There are some Bootstrap add-ons for Social Buttons at this link, in case they are useful:
http://www.psdahtmlpasoapaso.com/blog/complementos-para-extender-bootstrap
Answers:
username_1: 
How does it look?
username_1: 
Added several social media buttons; we still need to create those social media accounts and change the URL in each button. I added a contact email; it seemed right to put it there.
Status: Issue closed
|
oracle/weblogic-deploy-tooling | 662180248 | Title: Generate unique JTA Migratable Target names in configured clusters
Question:
username_0: A while back, WDT was fixed so that it generates unique JTA Migratable Target names in a dynamic cluster, but it looks like the same fix is also needed for configured clusters.
Namely:
```
JTAMigratableTarget:
    Cluster: "@@PROP:CLUSTER_NAME@@"
    UserPreferredServer: "@@PROP:MANAGED_SERVER_NAME_BASE@@2"
```
Should be:
```
JTAMigratableTarget:
    Name: "@@PROP:MANAGED_SERVER_NAME_BASE@@2"
    Cluster: "@@PROP:CLUSTER_NAME@@"
    UserPreferredServer: "@@PROP:MANAGED_SERVER_NAME_BASE@@2"
```
It looks like the WDT JTA MT name generation bug was fixed for dynamic clusters but not static clusters...
Answers:
username_1: The suggested workaround will not work, as Name is not one of the attributes allowed in the JTAMigratable section -
Issue Log for createDomain version 1.9.1 running WebLogic version 192.168.127.12.0 offline mode:
SEVERE Messages:
1. WLSDPLY-05029: Name is not one of the attribute names allowed in model location topology:/Server/managed-server1/JTAMigratableTarget
2. WLSDPLY-05029: Name is not one of the attribute names allowed in model location topology:/Server/managed-server2/JTAMigratableTarget
3. WLSDPLY-05029: Name is not one of the attribute names allowed in model location topology:/Server/managed-server3/JTAMigratableTarget
4. WLSDPLY-20001: createDomain did not complete the operation because validation failed
Total: WARNING : 0 SEVERE : 4
Status: Issue closed
|
atom/settings-view | 161959557 | Title: Doc Update for enum 'label' display in settings?
Question:
username_0: In [the Config docs](https://atom.io/docs/api/v1.7.4/Config), the object enum array states the following:
```
config:
someSetting:
type: 'string'
default: 'foo'
enum: [
{value: 'foo', description: 'Foo mode. You want this.'}
{value: 'bar', description: 'Bar mode. Nobody wants that!'}
]
```
However, if I don't put a `label` field, the select fields are not labeled. Correct number of items though. By adding a label, the options are labeled correctly:
```
config:
someSetting:
type: 'string'
default: 'foo'
enum: [
{value: 'foo', label: 'foo', description: 'Foo mode. You want this.'}
{value: 'bar', label: 'bar', description: 'Bar mode. Nobody wants that!'}
]
```
I couldn't find where to change the docs after poking around. If this is merely a documentation change and someone can point me to the right place, I'll update the example and description.
Answers:
username_1: Documentation about the `Config` can be found here: https://github.com/atom/atom/blob/v1.10.2/src/config.coffee#L17-L342
username_2: @username_1 This issue is about the actual documentation here: https://atom.io/docs/api/v1.10.2/Config#enum
username_1: @username_2 To my understanding, https://github.com/atom/atom/blob/v1.10.2/src/config.coffee#L240-L285 has the original documentation, which is extracted to https://atom.io/docs/api/v1.10.2/Config#enum
The user above asked "... where to change the docs...", which I delivered. Please correct me if I'm wrong.
username_1: Anyways, a proposed docs update was already denied: https://github.com/atom/atom/pull/11973
username_3: The top setting works fine here - we have no documentation for a label property because I don't believe we ever added one. The code certainly doesn't have any such reference.
The default Atom config at https://github.com/atom/atom/blob/master/src/config-schema.coffee also doesn't use a label property on enums.
I guess the real question is why some people need a label - it certainly doesn't appear to be by design.
username_2: @username_3 Around when this was filed (+/- a month?) there was a bug where `enum` of objects like was shown in the documentation wouldn't show at all... just as blank entries. I'm pretty sure it was fixed, but haven't verified myself as none of my packages have needed it (yet). |
waylaidwanderer/scarvesandcoffee-scraper | 677973942 | Title: Suggestion: adding additional elements to the metadata json
Question:
username_0: Hello! Thank you so much for creating this script! I use calibre and the fanficfare plugin to download individual stories from scarvesandcoffee, but I would love to do a batch download like this.
Fanficfare also collects metadata on the completion status, tags (I think for S&C, it combines categories and characters), series and number in the series, and the number of chapters, in addition to the metadata that you have already included in the JSON (title, author, rating, reviews, summary, published date, updated date). I don't know much about javascript, so how difficult would it be to add these additional metadata fields to the fetch-stories-metadata.js?
Thanks for considering! |
unrealcv/unrealcv | 365176620 | Title: Build failed in Linux
Question:
username_0: - Operating System: Ubuntu 16.04
- UE4 Version: 4.20
- UnrealCV Version: unrealcv-master (https://github.com/unrealcv/unrealcv.git)
- Client (python2, 3 or matlab): Python 3.6
- Problem Description: unrealcv-master failed to build.
I will be very grateful for any suggestions. Thanks.
[Log.txt](https://github.com/unrealcv/unrealcv/files/2431098/Log.txt)
[UBT-UE4Editor-Linux-Development.txt](https://github.com/unrealcv/unrealcv/files/2431099/UBT-UE4Editor-Linux-Development.txt)
Answers:
username_1: I was able to solve it with PR https://github.com/unrealcv/unrealcv/pull/146 on Windows (cross-compiled the plugin to Linux)
cakephp/cakephp | 962834935 | Title: Duplicate logs in Console Command with ConsoleLog
Question:
username_0: This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.10.0
* Platform and Target: Apache Mysql
### What you did
ConsoleCommand
```php
<?php
namespace App\Command;
use Cake\Console\Arguments;
use Cake\Console\Command;
use Cake\Console\ConsoleIo;
class HelloCommand extends Command
{
public function execute(Arguments $args, ConsoleIo $io)
{
$this->log('this log info hello', 'info');
}
}
```
Logging settings
```php
/*
* Configures logging options
*/
'Log' => [
'debug' => [
'className' => ConsoleLog::class,
'scopes' => false,
'levels' => ['notice', 'info', 'debug'],
],
'error' => [
'className' => ConsoleLog::class,
'scopes' => false,
'levels' => ['warning', 'error', 'critical', 'alert', 'emergency'],
],
// To enable this dedicated query log, you need set your datasource's log flag to true
'queries' => [
'className' => ConsoleLog::class,
'scopes' => ['queriesLog'],
],
],
```
### What happened
Duplicate logs are output when using $this->log or Log::info in Console Command with ConsoleLog class.
```sh
2021-08-06 15:17:51 Info: this log info hello
2021-08-06 15:17:51 Info: this log info hello
```
### What you expected to happen
This is probably because when the ConsoleLog class is used for logging, both the ‘stdout’ log setting and the ‘debug’ log setting are used for log output, resulting in duplicate log output.
If I use Log::drop(‘debug’) just before $this->log, the duplicate log is fixed.
```sh
2021-08-06 15:17:51 Info: this log info hello
```
Answers:
username_1: This is exactly what is happening. You have two loggers that receive the log, so you get two log messages output. If you don't want two log messages output, remove one of the loggers.
username_0: Is it recommended to use $io instead of $this->log if we want to output the log in Console Command?
username_1: It depends. If you want output to only go to stdout then using `$io` is the way to go. If however, you want output to go to file logs, or analog (or other loggers you have) then you should use `$this->log()`.
username_0: I'm not sure if this is the correct behavior, because the output of $this->log on Console Command is duplicated, but the output of $this->log on Controller is not.
username_1: It is correct. Commands run in a CLI environment which has stdout and stderr. A web server does not have the same output streams.
Status: Issue closed
|
ssbssa/heob | 316751197 | Title: Document how to create human-readable reports from leaks.xml
Question:
username_0: It's not possible to Select All + Copy heob's output in Qt Creator, and leaks.xml is very hard to read. There should be a (documented) way of creating human-readable reports from leaks.xml.
Answers:
username_1: Try to add -oleaks.html into the `Extra Arguments` field, and see if this leaks.html meets your needs.
username_1: No, they can't be configured.
If you don't need colors just use `-oleaks.txt`.
username_1: Oh, and in this output file you can show the leak contents as well with `-L1024`.
And if you have multiple leaks, and you want to see how their stacktraces might be related, try `-g2`.
Status: Issue closed
username_0: OK, thanks. :) Will try those out.
username_1: FYI, the HTML colors can now be configured with an external CSS file, thanks to #22. |
marmelab/react-admin | 657886679 | Title: Cannot read property 'ui' of undefined
Question:
username_0: I am integrating react-admin into react-boilerplate (**https://github.com/react-boilerplate/react-boilerplate**). When trying to access a resource, I get the following error:


Answers:
username_1: Hi,
Thank you for opening this issue :pray:
It should be an issue with Redux. But I need to reproduce the problem you describe.
Could you provide a CodeSandbox?
As explained in the bug report template:
1. Please fork the following template on CodeSandbox : [React Admin Template](https://codesandbox.io/s/github/marmelab/react-admin/tree/master/examples/simple)
2. Reproduce your bug. Remember to give us the clearest and simplest example possible.
3. Reply to this issue with the link of your CodeSandbox, and extra information if it's necessary.
Regards,
Adrien
username_0: Thanks username_1, I have got the solution.
I was not passing the admin Reducer; that's why the error occurred.
Status: Issue closed
username_2: what do you mean by passing the admin Reducer?
moment/luxon | 513157711 | Title: fromObject and toFormat("T") use different timezone when timezone is changed
Question:
username_0: We are currently relying on the property that
```
DateTime.fromObject({hour: 13, minute: 13}).toFormat("T") === "13:13"
```
is true in all situations (minus locale formatting for the separator). Is this a wrong assumption?
This is in a react-native context, and what happens is that when the app is open and the user changes the device's timezone and goes back to the app, the condition above doesn't hold. It seems that `fromObject` is using a different zone than what `toFormat("T")` is.
I can also confirm that `Settings.defaultZone` does not get changed when switching timezone in the app, so that value is not in sync with `new Date().getTimezoneOffset()` after the timezone has been changed.
Answers:
username_1: I'm not really clear on exactly what the scenario is here. Is this it:
```js
DateTime.fromObject({hour: 13, minute: 13}).toFormat("T") //=> "13:13"
var dt = DateTime.fromObject({hour: 13, minute: 13});
// change system zone
dt.toFormat("T") //=> 14:13
```
If that's not the scenario, can you lay out what happens in a similar format?
If that is the scenario, that's expected. Luxon's most important principle is that a DateTime instance represents a specific moment in time. When you change the system zone, the moment in time represented by that Luxon instance doesn't change. It is only now expressed in a different zone.
username_0: The scenario is this:
```
DateTime.fromObject({hour: 13, minute: 13}).toFormat("T") === "13:13"
// change system time zone
// this no longer is true
DateTime.fromObject({hour: 13, minute: 13}).toFormat("T") === "13:13"
```
In other words, I would expect `fromObject` to be effectful (it relies on the system time zone, obviously) but `toFormat` to be a pure function with no side effects (it only depends on the given DateTime). It seems that in our case those two calls are using different time zone settings.
username_1: I was able to reproduce this, at least in Chrome. What happens is that Luxon caches `DateTimeFormat` instances and `DateTimeFormat` instances cache the zone, even when no zone is provided. Here's the equivalent when removing Luxon:
```js
var dtf = new Intl.DateTimeFormat("en", { hour: "numeric", minute: "numeric", hour12: false } );
dtf.format(new Date(2019, 10, 28, 13, 13)) //=> "13:13"
// change my zone
dtf.format(new Date(2019, 10, 28, 13, 13)) //=> "8:13"
```
Seems like the only way to prevent this from happening would be to not cache `DateTimeFormat` instances. However, I did that on purpose because it's a significant performance difference. You can see the commit where my benchmark showed an 85x speedup for `toFormat` when caching these instances: bb77d5e6b968b0a2cc1be72d295f40a43a6687fb.
This is a bug, but I'm not willing to drop 85x speed for fixing it. However, it does look like Chrome in particular may have sped up this operation since I wrote that (just now, I tried some quick benchmarking in v8), so possibly this cache can be removed if similar performance improvements are available in Firefox and Safari. I will do some more testing. In the meantime, you can clear the caches explicitly if you want:
```
Settings.resetCaches()
```
username_1: After some additional testing, I had found that contrary to my hope above, Chrome has not made this faster. Here is the difference between caching DateTimeFormat objects and not caching them:
```
DateTime#toFormat with macro x 221,088 ops/sec ±1.18% (93 runs sampled)
DateTime#toFormat with macro no cache x 1,115 ops/sec ±1.90% (92 runs sampled)
```
So that's a huge speedup that I would lose if I fixed this bug.
There might be more radical solutions, such as incorporating the offset in the cache key. I will think about it, but at this point, the most likely outcome here is `wontfix`
username_0: Thanks for taking a look at this!
For some reason calling `Settings.resetCaches()` does not work either. Might have something to do with how react-native caches modules. And anyhow we would like to not use that hammer approach here.
The reason why luxon is such a great library is that it has a more functional API to dates than moment.js. This behaviour goes against that philosophy and I would personally pay the performance penalty rather than have a stateful API.
A common real world use case for experiencing this bug is with mobile applications. You open the app, go to a plane and cross a timezone. You go back to the app and see that all times are off.
username_0: Is there also a cache for the DateTime's time zone because I get the same time zone after changing it?
```javascript
Date.fromObject({hour: 13, minute: 13}).zoneName === "Europe/Helsinki"
// change timezone
// still true
Date.fromObject({hour: 13, minute: 13}).zoneName === "Europe/Helsinki"
```
username_1: It's not a stateful API, just a bug. Almost every immutable library ever uses caching; in fact, one of the nice things about immutability is that you _can_ cache so aggressively, because nothing can change. Statefulness is about the API and its expectations, not the any private state underneath. Here it's just that the private state is managed wrong.
That you would pay a 100x performance penalty to fix that bug makes you an outlier, since I've never even seen this one before. I'm not denying it's a real problem, just saying that the cure is for most users worse than the disease; slowing the library down by two orders of magnitude would bring countless applications to their knees. One of the hard parts of maintaining a widely-used open source library is that you can't please everyone all the time.
But I did think of a better workaround:
```js
DateTime.fromObject({hour: 13, minute: 13, zone: DateTime.local().zoneName });
```
That works by paying the performance penalty on object creation. Since the zone to format in is now explicit, it's part of the cache key and everything works.
You can use that conveniently like this:
```js
DateTime.fromObjectLocal = function(o) {
var realO = Object.assign({}, o, { zone: DateTime.local().zoneName });
return DateTime.fromObject(realO);
};
```
(though note that in the upcoming 2.0, it's `fromObject({ hour: 13, minute: 13 }, { zone: ... })`)
Re, your second issue: I can't reproduce that, and I'm confident the code doesn't cache anything there. In fact, my workaround above wouldn't even work if that happened. The implementation of that is [here](https://github.com/moment/luxon/blob/master/src/zones/localZone.js#L27); you can see it's constructing a new Intl.DateTimeFormat object to check what zone it resolves. So you must have something else going on there?
Status: Issue closed
|
kean/Nuke | 367579452 | Title: Nuke is not allowing SSL pinning.
Question:
username_0: Is there a way to access URLSessionDelegate methods to perform SSL pinning? I am in a situation where I have to add a custom SSL certificate, but I am not able to do it with the current Nuke architecture. Thank you.
Answers:
username_1: Hey, @username_0. Existing `DataLoader` is designed to be a simple solution for most basic cases. If you need some more advanced features like SSL pinning, you have at least two options:
- Use [Nuke with Alamofire](https://github.com/username_1/Nuke-Alamofire-Plugin)
- Implement your own custom loader and make it conform to `DataLoading` protocol
username_2: @username_0 I had the same issue. I solved it by writing a custom `DataLoading` class as @username_1 recommended. Below is the working code:
```
private class ImageDataLoader: Nuke.DataLoading {
    let manager: Alamofire.SessionManager

    init() {
        let serverTrustPolicies: [String: ServerTrustPolicy] = [
            "add your server address here": .disableEvaluation
        ]
        let configuration: URLSessionConfiguration = URLSessionConfiguration.default
        configuration.httpAdditionalHeaders = Alamofire.SessionManager.defaultHTTPHeaders
        let policyManager = ServerTrustPolicyManager(policies: serverTrustPolicies)
        let manager = Alamofire.SessionManager(configuration: configuration, serverTrustPolicyManager: policyManager)
        self.manager = manager
    }

    // MARK: DataLoading

    /// Loads data using Alamofire.SessionManager.
    func loadData(with request: URLRequest, didReceiveData: @escaping (Data, URLResponse) -> Void, completion: @escaping (Error?) -> Void) -> Cancellable {
        // Alamofire.SessionManager automatically starts requests as soon as they are created (see `startRequestsImmediately`)
        let task = self.manager.request(request)
        task.stream { [weak task] data in
            guard let response = task?.response else { return } // Never nil
            didReceiveData(data, response)
        }
        task.response { response in
            completion(response.error)
        }
        return task
    }
}

struct ImageLoader {
    static let `default` = ImageLoader()

    private init() {
        let pipeline = ImagePipeline {
            $0.dataLoader = ImageDataLoader()
        }
        Nuke.ImagePipeline.shared = pipeline
    }

    @discardableResult
    func loadImage(url: URL?,
                   options: Nuke.ImageLoadingOptions = Nuke.ImageLoadingOptions.shared,
                   into view: Nuke.ImageDisplayingView,
                   progress: Nuke.ImageTask.ProgressHandler? = nil,
                   completion: Nuke.ImageTask.Completion? = nil) -> Nuke.ImageTask? {
        if let url = url {
            return Nuke.loadImage(with: url, options: options, into: view, progress: progress, completion: completion)
        } else {
            view.display(image: options.placeholder)
            return nil
        }
    }
}
```
username_1: Thanks, @username_2!
I forgot to mention, there's also an entry in [Third Party Libraries Guide](https://github.com/username_1/Nuke/blob/master/Documentation/Guides/Third%20Party%20Libraries.md#using-other-networking-libraries) about implementing custom data loaders.
It seems that SSL pinning is a frequently requested feature, so I'm going to add it to the backlog and see whether it's feasible to make the built-in data loader support it without making it too complex. The idea behind the `DataLoading` protocol was that there is a very good chance that if someone needs a relatively complex feature like that, they probably already have their own custom data layer anyway. Or they are using Alamofire.
username_0: @username_1 Sorry for the delayed response. Thank you for all information. Would be nice to see SSL pinning inside Nuke in future.
@username_2 Thank you for sharing example code. it is really helpful.
Status: Issue closed
|
konsoletyper/teavm | 67435592 | Title: TimeZone support
Question:
username_0: I am working on adding timezone support. I have added a few classes (e.g. TimeZone, SimpleTimeZone, etc..) and a few methods to Calendar, GregorianCalendar, etc.. to add timezone support.
I have also added a couple of native Javascript methods to the runtime which are used by TimeZone. Currently I only support two timezones:
1. The current timezone of the browser (I have given this timezoneID "LOCAL" for lack of a better idea).
2. GMT
The portions that I have added to the runtime.js file are:
~~~~
function $rt_getTimezoneId(){
return "LOCAL";
}
function $rt_getTimezoneOffset(name, year, month, day, timeOfDayMillis){
if ($rt_getTimezoneId()===name){
var hours = Math.floor(timeOfDayMillis/1000/60/60);
var minutes = Math.floor(timeOfDayMillis/1000/60)%60;
var seconds = Math.floor(timeOfDayMillis/1000)%60;
var millis = timeOfDayMillis % 1000;
return -(new Date(year, month, day, hours, minutes, seconds, millis).getTimezoneOffset()*1000*60);
} else if ("UTC"===name || "GMT"===name){
return 0;
} else {
throw new Error("Unsupported Timezone: "+name);
}
}
function $rt_getTimezoneRawOffset(name){
if ($rt_getTimezoneId()===name){
var millis = new Date().getTime();
var i=0;
var addDays = 1000 * 60 *60 * 24 * 200; // Check 200 days later
while ($rt_isTimezoneDST(name, millis) && i++<4){
millis += addDays;
}
return -(new Date(millis).getTimezoneOffset()*1000*60);
} else if (name==='GMT' || name==='UTC'){
return 0;
} else {
throw new Error("Unsupported Timezone: "+name);
}
}
function $rt_isTimezoneDST(name, millis){
if ($rt_getTimezoneId()===name){
var year = new Date(millis).getFullYear();
var jan = new Date(year, 0, 1);
var jul = new Date(year, 6, 1);
var maxOff = Math.max(jan.getTimezoneOffset(), jul.getTimezoneOffset());
return new Date(millis).getTimezoneOffset()<maxOff;
} else if (name==='GMT' || name=='UTC'){
return false;
} else {
throw new Error("Unsupported Timezone: "+name);
}
}
~~~~
And these are used within the Timezone class as follows:
~~~~
@JSBody(params={},
script="return $rt_getTimezoneId()"
)
[Truncated]
@JSBody(params="name",
script="return $rt_getTimezoneRawOffset(name)")
private static native int getTimezoneRawOffset(String name);
@JSBody(params={"name","millis"}, script="return $rt_isTimezoneDST(name,millis)")
private static native boolean isTimezoneDST(String name, long millis);
~~~~
Support for other timezones could easily be added at the javascript level by simply overriding the 4 $rt timezone functions and employing a 3rd party JS library. I have found a few:
1. https://github.com/mde/timezone-js
2. https://github.com/dbaron/tz.js
3. https://github.com/sproutsocial/walltime-js
I'm not sure whether it makes sense to incorporate full timezone support into TeaVM since the browser doesn't do it natively. Do you have any suggestions or preferences on how you would prefer it included.
I have updated the existing DateTime tests to pass (they didn't previously take into account DST). I was about to write some TimeZone tests but without support for any objective timezones except for GMT, much of it would be moot. I suppose I could embed support for just a couple of timezones - just enough to write tests.
I'll be posting a branch with the changes shortly.
Answers:
username_1: I made an attempt to add timezone support some time ago, but soon felt pain, thanks to JavaScript with its 'powerful' API. To support timezones, one needs to include the entire tzdata into `classes.js`. One of the goals of TeaVM is the size of the generated file, so I don't really like the idea. Do you really need timezone support?
Also, I don't understand why you are including timezone functions in the `runtime.js` file. Can't you rewrite these functions in pure Java, using JSO and DOM where needed?
username_0: Yes. That's what I figured (not including in classes.js).
I wanted to implement those core methods in runtime.js to make it easy to add support for other timezones. For example, I can easily override these methods to add support for other timezones using another Javascript library if I want to.
username_0: Here is the proposed modification.
https://github.com/username_1/teavm/pull/95
username_0: "I wanted to implement those core methods in runtime.js to make it easy to add support for other timezones."
Although I suppose I could also implement a similar mechanism in Java. Let me look at this some more and seem if I can remove the javascript stuff.
username_0: I have removed the changes to runtime.js and moved everything into Java. The pull request (#95) includes the changes. I also added tests for TimeZone and Calendar from Apache harmony. Will be adding the GregorianCalendar test shortly as well.
username_0: Turns out the tests weren't passing in the browser. I have made some changes to try to get styles complying with TeaVM conventions. The latest changes also fix a bug with Date related to the "year" parameters in the constructors and the getYear() and setYear() methods. (Java Date handles "years" differently than Javascript Date objects do).
The TimeZone tests are now passing in the browser. I still need to get the Calendar test passing in the browser before the pull is ready to be merged.
username_0: I'm at the point where there are only a couple of tests still failing in the Calendar test and they are for features that are more obscure / potentially not needed. I need to push forward with other aspects now and don't have time to sort out these tests right now. Would you prefer I comment out the tests that fail, or leave them in, failing, so that someone else (or me later) can identify and fix them?
username_1: There is TZ database parser in JodaTime, which is under Apache 2.0 license. So we could borrow this parser to convert TZ database into some binary format, filter it by date (say, exclude everything earlier 1980, or as configured by user), and write this binary TZ database into `classes.js`. Then we could take another portion of JodaTime to calculate timezone offsets. There is already some code dealing with CLDR, so it will be easy to include support for timezone names. Please, don't forget that including `TimeZone` class **is not** timezone support. There are timezones in at least `Calendar` and `DateFormat`.
I don't like the idea of including timezones into `teavm-plaform` nor `runtime.js`. There are another ways of making extensible logic, see [ServiceLoader](http://docs.oracle.com/javase/7/docs/api/java/util/ServiceLoader.html).
username_0: I needed timezone support for Codename One, so our build server is using the add_timezone branch into which I continually merge the latest master when I update.
https://github.com/username_0/teavm/tree/add_timezone
It includes necessary modifications to Calendar and DateFormat. My strategy is to not include any particular timezones, but to make it pluggable to be able to add timezones easily. It includes a "special" timezone that I have called "Local" which is just the current browser timezone. GMT timezones also work (e.g. if you specify a timezone relative to GMT).
If we embedded timezone offsets into classes.js we could use the infrastructure I have set up here for registering for using the timezones quite easily. If you check the tests I have added you can see how it works.
This branch also includes some important fixes to the Date class as the "year" parameter in the constructor and getYear() method of the Java Date class are handled differently than their counterparts on the Javascript Date class. If you don't want to include all of these changes, then I can submit a separate pull request to fix just the bug.
username_1: Please, check out my `timezones` branch. It's almost complete. This implementation embeds a very small (about 50kb) packaged IANA timezone database into `classes.js` together with parts of a modified Joda Time. Also, I implemented a timezone detection algorithm, which tracks timezone history. Unfortunately, this algorithm is rather slow (about 350ms on my machine, and 60ms in a warmed up VM).
username_0: Hi Alexey,
I finally got around to testing this out. It satisfies the initial needs I had for building some apps in CN1 that require timezones (i.e. they didn't build before because of missing classes or methods, but now they do).
I haven't done any explicit testing of Timezone stuff other than observing that projects that used the Timezone APIs appear to work correctly.
Status: Issue closed
|
willthames/photodeck.lrdevplugin | 405958522 | Title: Current Version
Question:
username_0: This version appears to be 0.15 against 0.16.2 downloaded from Photodeck. Any chance we can get the latest version here please.
Many thanks
Status: Issue closed
Answers:
username_1: Just pushed the latest changes to Will's repository.
username_0: Issue appears to be with info.lua. If I leave the compiled version of this file in place and copy over the rest Lightroom accepts the plugin
username_0: Found it:
LrToolkitIdentifier = 'com.photodeck.lightroom-publish'
Sorry for the noise
username_1: Hi Kim,
Yes, Will's plugin is in a different name space.
Got a word that you have added manual sorting to your repo. Care to share your work?
Cheers
username_0: For sure. Two files need adding to: PhotodeckAPI.lua and PhotoDeckPublishServiceProvider.lua
PhotodeckAPI.lua:
function PhotoDeckAPI.imposeSortOrder( urlname, galleryId, remoteIdSequence )
  logger:trace(string.format('PhotoDeckAPI.imposeSortOrder("%s", "%s" )', urlname, galleryId))
  local seq = ""
  for k,v in pairs(remoteIdSequence) do
    logger:trace( string.format( "id: %s", v ))
    if k == 1 then
      seq = v
    else
      seq = seq .. "," .. v
    end
  end
  local galleryInfo = {}
  galleryInfo['gallery[medias_order]'] = seq
  galleryInfo['gallery[content_order]'] = 'manual-first'
  local response, error_msg = PhotoDeckAPI.request('PUT', '/websites/' .. urlname .. '/galleries/' .. galleryId .. '.xml', galleryInfo)
  if error_msg then
    logger:trace( string.format( "PhotoDeckAPI.imposeSortOrder() failed: %s", error_msg ) )
    return false
  end
  return true
end
and PhotoDeckPublishServiceProvider.lua:
publishServiceProvider.supportsCustomSortOrder = true -- this must be set for ordering
publishServiceProvider.imposeSortOrderOnPublishedCollection = function( publishSettings, info, remoteIdSequence )
  local urlname = publishSettings.websiteChosen
  local galleryId = info.remoteCollectionId
  return PhotoDeckAPI.imposeSortOrder( urlname, galleryId, remoteIdSequence )
end
username_1: Commit <PASSWORD>, based on your contribution.
Thanks! |
dotnet/maui | 920550858 | Title: [Spec] Implement Device class functionality to Core
Question:
username_0: # [The feature]
The Device class contains a number of properties and methods to help developers customize layout and functionality on a per-platform basis.
- **Flags** - Used to use experimental APIs in Xamarin.Forms. Obsolete.
- **SetFlags** - Remove.
- **FlowDirection** -Move to Window.
- **SetFlowDirection** - Remove.
- **Idiom** - Move to MauiContext and make it use essentials.
- **Info** - Obsolete.
- **SetIdiom** - Remove.
- **SetTargetIdiom** - Remove.
- **IsInvokeRequired** - obsolete (hidden)
- **PlatformInvalidator** - Try to remove.
- **Invalidate** - Try to remove.
- **RuntimePlatform** - Make it use essentials and move these to Window, use Application.Windows.First() to back these.
- **BeginInvokeOnMainThread**(Action)
- **GetNamedColor**(string) - Move to .NET MAUI Graphics.
- **GetNamedSize** (multiple overloads) - TODO find the 'current' way to do this.
- **StartTimer** - Move to Application or Window? Maybe this goes away and is handled by animation stuff / PlatformServices.
- **RequestedTheme** - Move to Window, use Essentials implementation.
- **RuntimePlatform** - Remove.
- **GetHash** - Remove.
- **GetMD5Hash** - Remove.
- **GetNamedColor** - Remove.
- **GetNamedSize** - Remove.
- **GetNativeSize** - Gets renderer or handler and calls GetDesiredSize - remove.
- **GetStreamAsync** - Remove.
- **GetUserStoreForApplication** - Remove.
- **OpenUriAction** - Obsolete, use essentials.
- **QuitApplication** - Move to Application.
- **StartTimer** - Move to window? Maybe this goes away and is handled by animation stuff
- **Dispatcher** - find window of mainpage and use that dispatcher.
- **MainPage** / **CreateWindow** - Make CreateWindow return MainPage; if no MainPage is set, throw. On setting MainPage, go through windows to find the window with the current MainPage, and replace the MainPage.
# Difficulty : [medium]
Answers:
username_1: Duplicate of https://github.com/dotnet/maui/issues/1965
Status: Issue closed
|
YummYume/Russia-2.0 | 566522852 | Title: Default Industrial Path - 1 Week
Question:
username_0: Basic Industrial Focuses (needed to complete the 5 Year Plan).
Focuses : The Second 5 Year Plan, The Motherland, Improve Russia, Expand the Civilian Industry in Moscow, Keep Expanding the Civilian Industry all over the Motherland, Armement Effort, Expand the Armement Effort, Heavy Industry, Finish the 5 Year Plan.<issue_closed>
Status: Issue closed |
JasonRivers/Docker-Nagios | 235794532 | Title: how to reload nagios ?
Question:
username_0: As I understand it, the only way to apply changes is to restart the docker container?
Are there any other options to reload nagios?
Answers:
username_1: There is a way to do this from within the web panel for nagios.
On the left hand menu select "Process Info" (near the bottom under System) and then select "Restart the Nagios process"
This will trigger nagios to stop and start reloading any new configuration, - be aware that if there are errors in your configuration then the nagios process will fail to restart, it's a good idea to check that the config is OK using:
```
nagios -v /opt/nagios/etc/nagios.cfg
```
username_0: Thank you.
Exactly what I need.
Status: Issue closed
username_2: When I tried to restart, I got the following error:
Error: Could not open command file '/opt/nagios/var/rw/nagios.cmd' for update!
The permissions on the external command file and/or directory may be incorrect. Read the FAQs on how to setup proper permissions.
An error occurred while attempting to commit your command for processing. |
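For later readers, a hedged sketch of the usual fix, run inside the container — the exact user/group ownership for this image is an assumption, so check which users run the nagios and web-server processes first:
```sh
# grant the web server write access to the external command file directory
chown -R nagios:nagios /opt/nagios/var/rw
chmod -R ug+rw /opt/nagios/var/rw
```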
appium/appium | 41899910 | Title: Always got (Original error: spawn ENOENT) after updated to V1.2.2
Question:
username_0: My Python scripts worked fine before V1.2.2, but I always get (Original error: spawn ENOENT) after updating to V1.2.2. Here is the traceback:
Traceback (most recent call last):
File "/Users/yuan/Desktop/fotor-test/Fotor_Test.py", line 33, in setUp
self.driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
File "/Library/Python/2.7/site-packages/appium/webdriver/webdriver.py", line 35, in **init**
super(WebDriver, self).**init**(command_executor, desired_capabilities, browser_profile, proxy, keep_alive)
File "/Library/Python/2.7/site-packages/selenium-2.42.1-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 73, in **init**
self.start_session(desired_capabilities, browser_profile)
File "/Library/Python/2.7/site-packages/selenium-2.42.1-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 121, in start_session
'desiredCapabilities': desired_capabilities,
File "/Library/Python/2.7/site-packages/selenium-2.42.1-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 173, in execute
self.error_handler.check_response(response)
File "/Library/Python/2.7/site-packages/selenium-2.42.1-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 164, in check_response
raise exception_class(message, screen, stacktrace)
WebDriverException: Message: u'A new session could not be created. (Original error: spawn ENOENT)'
I'm not sure if this is a bug in V1.2.2 or something I've done wrong.
p.s. Nothing wrong happened with the same scripts in V1.2.0 and V1.2.1. |
CMSCompOps/WmAgentScripts | 893265577 | Title: Increase the stepchain conversion threshold.
Question:
username_0: **Impact of the new feature**
Stepchain workflows
**Is your feature request related to a problem? Please describe.**
Described in #827
**Describe the solution you'd like**
Current threshold is 75%. To get an immediate effect, the threshold should be ~90%
**Describe alternatives you've considered**
Other ideas are listed in #827
**Additional context**
@z4027163<issue_closed>
Status: Issue closed |
google/closure-templates | 128572941 | Title: What is the alternative for deprecated-noautoescape?
Question:
username_0: Is there any way to directly insert raw HTML into a template when working with the soy templates 2016-01-12 version?
In old versions (2012-12-21 for example) we were able to use autoescape="false".
In newer versions (up to 2015-04-10) we were able to use autoescape="deprecated-noautoescape".
In 2016-01-12 this deprecated API was removed, and I can't find an alternative for deprecated-noautoescape (I was looking for it here: https://developers.google.com/closure/templates/docs/security)
Answers:
username_1: where is the html coming from?
username_0: Html coming from trusted source (rendering dynamically)
username_2: In Java or JavaScript?
username_0: In Java
username_1: Then you can pass it using one of the helpers in com.google.template.soy.data to construct a SanitizedContent object with ContentKind.HTML: SanitizedContents or UnsafeSanitizedContentOrdainer.
Status: Issue closed
|
outercloudstudio/Ike-And-Liams-s-Game | 614961923 | Title: Does your youtube video not have sound intentionally, or what?
Question:
username_0: cuz i will like it better if it had sound...
Answers:
username_1: oh sorry
username_0: Plus, I know how to use a module, but what are you going to do with one that says "hoi" and I don't even know about the other one...
username_1: Basically:
Only You will work on Game.py
I will work on the src.py
TO MAKE A CHANGE
-do change
-commit
-push to origin
TO UPDATE THE FILES
-refresh origin
-pull from origin
To use functions from src.py in Game.py, just do src.FUNCTION
I will create functions for creating random numbers and stuff to make it easier.
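In plain git commands, the change/update steps above roughly correspond to the following (the branch name is an assumption):
```sh
# to make a change
git add .
git commit -m "describe the change"
git push origin master
# to update the files
git pull origin master
```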
Status: Issue closed
username_0: what does src.py do? |
MicrosoftDocs/azure-docs | 657312637 | Title: Error following the steps
Question:
username_0: ### Step
az feature register --namespace "Microsoft.ContainerService" --name "EncryptionAtHost"
### Result
Once the feature 'EncryptionAtHost' is registered, invoking 'az provider register -n Microsoft.ContainerService' is required to get the change propagated
**The feature 'EncryptionAtHost' could not be found.**
### az version
az --version
azure-cli 2.9.0
command-modules-nspkg 2.0.3
core 2.9.0
nspkg 3.0.4
telemetry 1.0.4
Extensions:
aks-preview 0.4.55
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b20ff19-2e1e-941c-2dfc-ea5456b081ef
* Version Independent ID: 6958b886-122d-0fce-3f77-d0c74944fb0e
* Content: [Enable host-based encryption on Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/enable-host-encryption)
* Content Source: [articles/aks/enable-host-encryption.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/enable-host-encryption.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
Answers:
username_1: @username_0, Thanks for the question! We are investigating and will update you shortly.
Status: Issue closed
username_2: @username_0 Thanks for pointing this out.
We need to register the `EnableEncryptionAtHostPreview` feature flag under `Microsoft.ContainerService`.
Please use the commands below to enable this:
`az feature register --namespace "Microsoft.ContainerService" --name "EnableEncryptionAtHostPreview"`
`az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableEncryptionAtHostPreview')].{Name:name,State:properties.state}"`
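As the original error output notes, once the feature shows as Registered you still need to propagate the change:
`az provider register -n Microsoft.ContainerService`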
The document is updated to reflect these changes. Changes will go live soon.
username_3: Thank you Vikas |
Belithe/ProjectPeriode3 | 579857817 | Title: Project Assignment 3, Variant D
Question:
username_0: As is probably known, a seller of beverages in the Netherlands is required to remit the VAT that has been charged to the Tax Administration (Belastingdienst).
At the end of each quarter, the application must show how much VAT has to be remitted.
Assignment: Create a form where, after selecting a quarter, the following is calculated:
Quarter selection (Q1, Q2, Q3, Q4): << quarter selection >>
Quarter runs from: _________________ to:
Total VAT remittance, low rate 6%:
Total VAT remittance, high rate 21%:
Total VAT remittance: (sum of the two amounts above).
This assumes all non-alcoholic beverages are taxed at the low rate (6%) and all alcoholic beverages at the high rate (21%).<issue_closed>
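A minimal Python sketch of the required calculation, assuming each sale record carries a VAT-exclusive amount and an alcohol flag (the record layout is hypothetical):
```python
LOW_RATE = 0.06   # non-alcoholic beverages
HIGH_RATE = 0.21  # alcoholic beverages

def vat_for_quarter(sales):
    # sales: iterable of dicts like {'amount': 10.0, 'alcoholic': False}
    low = sum(s['amount'] for s in sales if not s['alcoholic']) * LOW_RATE
    high = sum(s['amount'] for s in sales if s['alcoholic']) * HIGH_RATE
    return low, high, low + high
```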
Status: Issue closed |
jertel/vuegraf | 1005931280 | Title: Already Existing Database
Question:
username_0: It can't be used with a database that already exists. I commented out the createdatabase function and it worked. I'm not very good at Python, so I'm not sure how to do the if/else logic from the JSON config.
Answers:
username_1: Please see https://github.com/username_1/vuegraf/issues/11 and provide more information, such as redacted configuration and logs showing the errors.
Issues are intended for well-documented, reproducible problems. If you're looking for support use the Discussions.
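For anyone hitting the same wall, a sketch of the if/else the reporter describes — `create_database` is a hypothetical config key, not an existing vuegraf option, and the `influxConfig`/`influx` names are assumptions:
```python
# sketch only: skip database creation when the config says the DB already exists
if influxConfig.get('create_database', True):
    influx.create_database(influxConfig['database'])
```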
Status: Issue closed
|
quarkusio/quarkus | 433378781 | Title: Finalize multiple datasource support for Agroal
Question:
username_0: @username_2 offered to take a look at this one so dumping what I have in mind.
### Move named datasources to `.named` config namespace
Right now, named datasources are implemented like this:
```
quarkus.datasource.datasource2.driver=org.h2.Driver
quarkus.datasource.datasource2.url=jdbc:h2:tcp://localhost/mem:datasource2
quarkus.datasource.datasource2.username=username2
quarkus.datasource.datasource2.min-size=2
quarkus.datasource.datasource2.max-size=12
```
The issue is that the datasource name could conflict with current or future Agroal properties (current is not really an issue as you will have an error but future is really a problem).
Moving to something like that would help:
```
quarkus.datasource.named.datasource2.driver=org.h2.Driver
quarkus.datasource.named.datasource2.url=jdbc:h2:tcp://localhost/mem:datasource2
quarkus.datasource.named.datasource2.username=username2
quarkus.datasource.named.datasource2.min-size=2
quarkus.datasource.named.datasource2.max-size=12
```
### Check if `MultipleDataSourcesConfigTest` can be enabled again
We had to disable this test because it was not possible to have several tests with different configurations: the configuration was cached somehow.
It's possible that David fixed it already, let's see if it's fixed.
### Get rid of the `@DataSource` qualifier
Right now, it's used as a poor man's `@Named` as I wasn't able to make `@Named` work properly with a default datasource being unnamed (and for which I don't want a `@Named` to be necessary).
Let's see if we can fix that.
Answers:
username_1: I'll have to look into why this was happening. I recall a class loading related issue with config caches but I thought I addressed the problem at least once. Given the big changes over the last couple of weeks, it might not be fully solved.
username_0: Ok, that's good to know.
So maybe we can drop this part for now and work on the rest, @username_2 .
username_2: Okay, sounds to me like what we want to do here now is figure out the bean resolution with datasources, correct?
username_0: Yeah, mostly see if we can do better than having a specific qualifier.
Status: Issue closed
|
kubernetes/autoscaler | 492382633 | Title: Azure CA Initialization Error - Unsupported Instance Type
Question:
username_0: Hello,
The cluster autoscaler fails to initialize when started with a VMSS with zero instances. It continuously outputs the following error:
utils.go:318] Unable to build proper template node for xxxxxxxxxxxxxxxx: instance type "Standard_NV6_Promo" not supported
My guess is that this is because the instance type Standard_NV6_Promo is not in the map defined in azure_instance_types.go. Is this correct? Is there a workaround for this besides changing my instance type?
Answers:
username_1: /kind bug
/area provider/azure
username_1: /assign @nilo19 |
UglyToad/PdfPig | 574229572 | Title: `Expected name as dictionary key` Exception
Question:
username_0: Hi, we've seen the above exception in our application logs; we're unable to reproduce it at the moment.
```
_UglyToad.PdfPig.Exceptions.PdfDocumentFormatException: Expected name as dictionary key, instead got: SAPinfoStart TOA_DARA at UglyToad.PdfPig.Tokenization.DictionaryTokenizer.ConvertToDictionary(IReadOnlyList`1 tokens) at UglyToad.PdfPig.Tokenization.DictionaryTokenizer.TryTokenize(Byte currentByte, IInputBytes inputBytes, IToken& token) at UglyToad.PdfPig.Tokenization.Scanner.CoreTokenScanner.MoveNext() at UglyToad.PdfPig.Tokenization.Scanner.PdfTokenScanner.MoveNext() at UglyToad.PdfPig.Tokenization.Scanner.PdfTokenScanner.Get(IndirectReference reference) at UglyToad.PdfPig.Parser.Parts.DirectObjectFinder.Get[T](IndirectReference reference, IPdfTokenScanner scanner) at UglyToad.PdfPig.Parser.DocumentInformationFactory.Create(IPdfTokenScanner pdfTokenScanner, TrailerDictionary trailer) at UglyToad.PdfPig.Parser.PdfDocumentFactory.OpenDocument(IInputBytes inputBytes, ISeekableTokenScanner scanner, IContainer container, Boolean isLenientParsing, String password) at UglyToad.PdfPig.Parser.PdfDocumentFactory.Open(IInputBytes inputBytes, ParsingOptions options) at document.packs.bff.Utils.PdfFileUtils.GetPdfProperties(IFormFile file)
```
Answers:
username_1: Hi there, sorry to hear you're running into this error. I thought the `SAPinfoStart` might be relevant/a clue to the document producer so I googled it and this thread seems related https://answers.sap.com/answers/11694119/view.html
The error might be in:
```
/Producer (SAP NetWeaver 700 )
%SAPinfoStart TOA_DARA
```
The comment in the dictionary should already be handled but I'll add a test around it to confirm, if not that's likely to be the bug.
username_1: I've checked this works and it seems it was fixed in version 0.1.0. If you're running 0.1.0 now you shouldn't see this error again. Let me know if you want to keep this issue open in case you find a document that reproduces the error.
username_0: Thanks @username_1, we're not running 0.1.0; we'll update to that version & create another issue if we see it again.
Status: Issue closed
|
bitcoin-s/bitcoin-s | 781324883 | Title: Add WalletSync documentation similar to ChainSync
Question:
username_0: In #2461 I added `WalletSync`.
We have a nice example of how to use `ChainSync` to sync your chainstate with an arbitrary data source; we should document the same for the wallet.
https://bitcoin-s.org/docs/next/chain/filter-sync#syncing-block-filters-against-bitcoind<issue_closed>
Status: Issue closed |
bieniu/ha-shellies-discovery | 800749105 | Title: Shelly i3 shortpush flooding my history
Question:
username_0: **Describe the bug**
My history is flooded with i3 short-push messages when I'm not even using it.
**Expected behaviour**
Show entries when short-push is really triggered.
**Versions:**
- Home Assistant: 2021.2.0
- Shellies Discovery: 0.37.0
- Shelly device firmware: 20201124-092930/v1.9.0@57ac4ad8
**Shellies Discovery automation:**
Not sure what should I provide here. Sorry.
**Debug log:**
No log but history is full off:

Status: Issue closed
Answers:
username_1: Take a look here https://github.com/username_1/ha-shellies-discovery/issues/91
The problem has been reported to Shelly, I don't know when it will be fixed

username_0: Thanks. |
GothenburgBitFactory/taskwarrior | 296746916 | Title: [TW-754] deleted and completed tasks need color rules, too
Question:
username_0: _<NAME> on 2009-12-04T23:07:26Z says:_
status:completed tasks are currently almost never seen, as (I think all but "completed") reports filter them out, but being able to view those tasks, in mixed company with status:pending tasks, would actually be very useful in the analysis of project history, for example. If completed tasks can be viewed, especially when mixed with pending tasks, then I think users should be able to control their colors.
the use of "-Z" is ambiguous with the removal of the "Z" tag.
This would have to be something like "--Z" instead,
if command line options are to be supported at all.
We only discussed the use of command-line options very briefly. I know that neither you nor Fredde are particularly hot on the idea, but I hope we can stay open to it a while longer.
With regards to ambiguity, I think we established that if a command-line "switch" could only be used immediately following the "task" command,
that there could be no confusion.
One objection to the use of cli options is that a new lot of cryptic things need to be memorized. I think this is not really the case here, as we are building an interactive interface with a great lot of single-key commands. The user will have on-screen reference for these commands, along with a more detailed reference "card" available. If command line switches were ALWAYS the direct cli counterpart to the interactive key presses, then the use in one mode only re-enforces the understanding of the other. Once you are comfortable with one, the other will come naturally, and fluid use of either mode will result.
cli-options would result in;
* much stronger parallels between cli and interactive usage
* dramatically reduced key-strokes, for those that use them
* invoke behavior with two characters, that could be done otherwise, but would require more complex query construction
* nothing taken away from existing cli usage
* no new actions, just another way to perform existing ones
did I mention the key-strokes ?
[this was copied out of bug #317]
Status: Issue closed
Answers:
username_0: Migrated metadata:
```
Created: 2009-12-04T23:07:26Z
Modified: 2014-02-09T02:06:36Z
```
username_0: _<NAME> on 2009-12-07T07:57:46Z says:_
good default color settings for deleted and completed items would be
color.completed=invert # bg/fg reversal of colors
and
color.deleted=black on_white # based on the opposite of light/dark terminal defaults
These should stand out hard. If the first started out as "white on_black", and was subject to no other color rules, the effect on both would be similar (identical?), but that's ok.
username_0: _<NAME> on 2011-07-09T23:01:39Z says:_
New rules rc.color.completed and rc.color.deleted implemented. |
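As a `.taskrc` sketch using the defaults proposed above (the exact value syntax may vary between taskwarrior versions):
```
color.completed=invert
color.deleted=black on_white
```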
MarimerLLC/csla | 351676739 | Title: DataAnnotations: what are the conditional compile symbols on [CommonRules.DataAnnotation(..) in CSLA 4?
Question:
username_0: Hi, I've noticed that there are conditional compilation symbols on this method:
#if (ANDROID || IOS) || NETFX_PHONE
..
#elif NETFX_CORE
..
#else
..
#endif
The [ValidationContext] is only used when [ANDROID] [IOS] [NETFX_PHONE] or [NETFX_CORE] are defined.
I'm building an attribute based validation rule that saved the name of a Property as [string] and is associated with the Attribute declaration above the properties that use them. The actual property is of a complex type whose properties change according to the defining object state. So, I need to [get] this [validation associated property value] each time the validating property is validated.
I can see that [ValidationContext] is initialized with the [target] property that [DataAnnotation(object target, …)] receives as a parameter. But it's only done when [ANDROID] [IOS] [NETFX_PHONE] or [NETFX_CORE] are defined.
I know that this generates a strongly coupled scenario. I'm mitigating the effects by using interfaces.
I'm trying to use this mechanism so it can be used on any CSLA 4 supported UI platform.
2 questions:
1. When am I supposed to use these symbols: [ANDROID] [IOS] [NETFX_PHONE] or [NETFX_CORE]
2. What do you suggest I do (apart from building regular CSLA style validation rules) in order to have [DataAnnotation(…)] always call [args.Attribute.GetValidationResult(pValue, ctx)] instead of [args.Attribute.IsValid(pValue)]? Should a new conditional compile symbol be defined? Or maybe a new type [CslaValidationAttribute] containing a bool property indicating whether to use [args.Attribute.GetValidationResult(pValue, ctx)] or not (being false by default).
Thanks.
Answers:
username_1: I have been slowly removing the old compiler directives from the code, and would love any help!
If you'd like to submit a pull request with the changes I would appreciate it!
username_0: Sure Rocky, I'll generate a pull request. Glad to participate and help.
username_1: This is slowly but surely happening as code gets updated for other issues, so I'm going to close this overarching issue.
Status: Issue closed
|
DMTF/Redfish-Mockup-Creator | 725202866 | Title: KeyError odata.id with fenghuo R1200 V5
Question:
username_0: Here is the output when I run creator with my Fenghuo R1200 V5 server in my lab.
If this error can be ignored so I can continue with rest of resources ?
```
# ./copy_Fenghuo_R1200-V5.sh
#
# rhost Redfish Protocol Versions: GET /redfish
# rhost: 10.214.59.200
# full directory path: /root/Redfish-Mockup-Creator/Fenghuo_R1200-V5
# description:
# starting mockup creation
# Creating /redfish resource
# Creating /redfish/v1 resource
# Creating /redfish/v1/odata resource
# Creating /redfish/v1/$metadata resource
# Start Creating resources under root service:
# Creating resource at: /redfish/v1/Systems
# Creating resource at: /redfish/v1/Systems/1
# Creating resource at: /redfish/v1/Systems/1/Processors
# Creating resource at: /redfish/v1/Systems/1/Processors/1
# Creating resource at: /redfish/v1/Systems/1/Processors/2
# Creating resource at: /redfish/v1/Systems/1/Memory
# Creating resource at: /redfish/v1/Systems/1/Memory/1
# Creating resource at: /redfish/v1/Systems/1/Memory/2
# Creating resource at: /redfish/v1/Systems/1/Memory/3
# Creating resource at: /redfish/v1/Systems/1/Memory/4
# Creating resource at: /redfish/v1/Systems/1/Memory/5
# Creating resource at: /redfish/v1/Systems/1/Memory/6
# Creating resource at: /redfish/v1/Systems/1/Memory/7
# Creating resource at: /redfish/v1/Systems/1/Memory/8
# Creating resource at: /redfish/v1/Systems/1/Memory/9
# Creating resource at: /redfish/v1/Systems/1/Memory/10
# Creating resource at: /redfish/v1/Systems/1/Memory/11
# Creating resource at: /redfish/v1/Systems/1/Memory/12
# Creating resource at: /redfish/v1/Systems/1/Memory/13
# Creating resource at: /redfish/v1/Systems/1/Memory/14
# Creating resource at: /redfish/v1/Systems/1/Memory/15
# Creating resource at: /redfish/v1/Systems/1/Memory/16
# Creating resource at: /redfish/v1/Systems/1/Memory/17
# Creating resource at: /redfish/v1/Systems/1/Memory/18
# Creating resource at: /redfish/v1/Systems/1/Memory/19
# Creating resource at: /redfish/v1/Systems/1/Memory/20
# Creating resource at: /redfish/v1/Systems/1/Memory/21
# Creating resource at: /redfish/v1/Systems/1/Memory/22
# Creating resource at: /redfish/v1/Systems/1/Memory/23
# Creating resource at: /redfish/v1/Systems/1/Memory/24
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/1
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/2
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/3
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/4
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/5
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/6
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/7
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/8
# Creating resource at: /redfish/v1/Systems/1/SimpleStorage
# Creating resource at: /redfish/v1/Systems/1/SimpleStorage/1
# Creating resource at: /redfish/v1/Systems/1/Storage
# Creating resource at: /redfish/v1/Systems/1/Storage/HA-RAID
[Truncated]
# Skipping already processed resource at: /redfish/v1/Chassis/HA-RAID.0.StorageEnclosure.0/Drives/Disk.Bay.1
# Creating resource at: /redfish/v1/Systems/1/LogServices
# Creating resource at: /redfish/v1/Systems/1/LogServices/Log1
# Creating resource at: /redfish/v1/Systems/1/LogServices/Log1/Entries
Traceback (most recent call last):
File "redfishMockupCreate.py", line 826, in <module>
main(sys.argv)
File "redfishMockupCreate.py", line 565, in main
addCopyright, addHeaders, addTime, exceptionList)
File "redfishMockupCreate.py", line 604, in recursive_call
addCopyright, addHeaders, addTime, exceptionList)
File "redfishMockupCreate.py", line 604, in recursive_call
addCopyright, addHeaders, addTime, exceptionList)
File "redfishMockupCreate.py", line 604, in recursive_call
addCopyright, addHeaders, addTime, exceptionList)
[Previous line repeated 2 more times]
File "redfishMockupCreate.py", line 594, in recursive_call
1, " Creating resource at: {}".format(x["@odata.id"]))
KeyError: '@odata.id'
```
Answers:
username_1: Unfortunately `@odata.id` is pretty critical to generating the file structure of the mockup. If that property is not present, then we don't have a hardened method for creating the appropriate folders in the hierarchy.
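For illustration, here is a minimal sketch (not the actual patch) of the kind of guard that would skip such resources instead of crashing; the member dicts below are made up:
```python
# Minimal sketch only: skip members that have no "@odata.id" instead of
# indexing it directly (x["@odata.id"]) and raising KeyError.
members = [
    {"@odata.id": "/redfish/v1/Systems/1/LogServices/Log1/Entries/1"},
    {"Name": "entry without an @odata.id"},  # direct indexing would raise here
]

for x in members:
    path = x.get("@odata.id")  # dict.get returns None instead of raising
    if path is None:
        print("# Skipping resource with no @odata.id")
        continue
    print("# Creating resource at: {}".format(path))
```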
username_2: @username_0 Can you try out PR #53 and verify if it solves this issue?
username_0: @username_2 With the PR #53 , the previous error does not block the processing.
```
#
# rhost Redfish Protocol Versions: GET /redfish
# rhost: 10.214.59.193
# full directory path: /root/Redfish-Mockup-Creator/Fenghuo_R1200-V5
# description:
# starting mockup creation
# Creating /redfish resource
# Creating /redfish/v1 resource
# Creating /redfish/v1/odata resource
# Creating /redfish/v1/$metadata resource
# Start Creating resources under root service:
# Creating resource at: /redfish/v1/Systems
# Creating resource at: /redfish/v1/Systems/1
# Creating resource at: /redfish/v1/Systems/1/Processors
# Creating resource at: /redfish/v1/Systems/1/Processors/1
# Creating resource at: /redfish/v1/Systems/1/Processors/2
# Creating resource at: /redfish/v1/Systems/1/Memory
# Creating resource at: /redfish/v1/Systems/1/Memory/1
# Creating resource at: /redfish/v1/Systems/1/Memory/2
# Creating resource at: /redfish/v1/Systems/1/Memory/3
# Creating resource at: /redfish/v1/Systems/1/Memory/4
# Creating resource at: /redfish/v1/Systems/1/Memory/5
# Creating resource at: /redfish/v1/Systems/1/Memory/6
# Creating resource at: /redfish/v1/Systems/1/Memory/7
# Creating resource at: /redfish/v1/Systems/1/Memory/8
# Creating resource at: /redfish/v1/Systems/1/Memory/9
# Creating resource at: /redfish/v1/Systems/1/Memory/10
# Creating resource at: /redfish/v1/Systems/1/Memory/11
# Creating resource at: /redfish/v1/Systems/1/Memory/12
# Creating resource at: /redfish/v1/Systems/1/Memory/13
# Creating resource at: /redfish/v1/Systems/1/Memory/14
# Creating resource at: /redfish/v1/Systems/1/Memory/15
# Creating resource at: /redfish/v1/Systems/1/Memory/16
# Creating resource at: /redfish/v1/Systems/1/Memory/17
# Creating resource at: /redfish/v1/Systems/1/Memory/18
# Creating resource at: /redfish/v1/Systems/1/Memory/19
# Creating resource at: /redfish/v1/Systems/1/Memory/20
# Creating resource at: /redfish/v1/Systems/1/Memory/21
# Creating resource at: /redfish/v1/Systems/1/Memory/22
# Creating resource at: /redfish/v1/Systems/1/Memory/23
# Creating resource at: /redfish/v1/Systems/1/Memory/24
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/1
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/2
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/3
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/4
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/5
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/6
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/7
# Creating resource at: /redfish/v1/Systems/1/EthernetInterfaces/8
# Creating resource at: /redfish/v1/Systems/1/SimpleStorage
# Creating resource at: /redfish/v1/Systems/1/SimpleStorage/1
# Creating resource at: /redfish/v1/Systems/1/Storage
# Creating resource at: /redfish/v1/Systems/1/Storage/HA-RAID
# Creating resource at: /redfish/v1/Chassis/HA-RAID.0.StorageEnclosure.0/Drives/Disk.Bay.0
# Creating resource at: /redfish/v1/Systems/1/Storage/HA-RAID/Volumes/Controller.0.Volume.0
# Skipping already processed resource at: /redfish/v1/Chassis/HA-RAID.0.StorageEnclosure.0/Drives/Disk.Bay.0
[Truncated]
# Creating resource at: /schemas/v1/Syslog.v1_0_0.json
# Skip parsing of Location reference: /schemas/v1/Syslog.v1_0_0.json
# Creating resource at: /redfish/v1/JsonSchemas/SMTP.v1_0_0
# Creating resource at: /schemas/v1/SMTP.v1_0_0.json
# Skip parsing of Location reference: /schemas/v1/SMTP.v1_0_0.json
# Creating resource at: /redfish/v1/JsonSchemas/SmcFirmwareInventory.v1_0_0
# Creating resource at: /schemas/v1/SmcFirmwareInventory.v1_0_0.json
# Skip parsing of Location reference: /schemas/v1/SmcFirmwareInventory.v1_0_0.json
# Creating resource at: /redfish/v1/JsonSchemas/IPAccessControl.v1_0_0
# Creating resource at: /schemas/v1/IPAccessControl.v1_0_0.json
# Skip parsing of Location reference: /schemas/v1/IPAccessControl.v1_0_0.json
# Creating resource at: /redfish/v1/JsonSchemas/SmcLogEntryExtensions
# Creating resource at: /schemas/v1/SmcLogEntryExtensions.json
# Skip parsing of Location reference: /schemas/v1/SmcLogEntryExtensions.json
# Creating resource at: /redfish/v1/JsonSchemas/SmcPowerExtensions.v1_0_0
# Creating resource at: /schemas/v1/SmcPowerExtensions.v1_0_0.json
# Skip parsing of Location reference: /schemas/v1/SmcPowerExtensions.v1_0_0.json
# Skipping already processed resource at: /redfish/v1/SessionService/Sessions
# redfishMockupCreate Completed creating mockup
```
username_2: Great. Thanks for testing it.
Status: Issue closed
|
elixir-lang/elixir | 450835712 | Title: Compiler crash on recursive macro guards
Question:
username_0: # Versions
```
➜ ~ elixir -v
Erlang/OTP 22 [erts-10.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe] [dtrace]
Elixir 1.8.2 (compiled with Erlang/OTP 21)
# OS: 64bit Mac OS X 10.14.4 18E226
# Kernel: x86_64 Darwin 18.5.0
```
# Minimal failing example
```ex
defmodule CrashBeam do
defmacro foo(a) when is_atom(a) do
foo([a])
end
end
```
# Example usage
```ex
defmodule CrashBeam do
defmacro foo(a) when is_atom(a) do
foo([a])
end
defmacro foo(a) when is_list(a) do
quote do
unquote(a)
end
end
end
```
### Current behavior
```
Compiling 1 file (.ex)
Function: foo/1
== Compilation error in file lib/crash_beam.ex ==
** (CompileError) lib/crash_beam.ex: internal error in beam_ssa_opt;
crash reason: {case_clause,
{'EXIT',
{{badkey,{b_local,{b_literal,foo},1}},
[{erlang,map_get,
[{b_local,{b_literal,foo},1},
#{{b_local,{b_literal,'MACRO-foo'},2} =>
{st,
#{0 =>
{b_blk,#{},
[{b_set,#{},{b_var,'@ssa_bool'},{bif,is_atom},[{b_var,1}]}],
{b_br,#{},{b_var,'@ssa_bool'},5,4}},
3 =>
{b_blk,#{},
[{b_set,#{},{b_var,9},put_list,[{b_var,1},{b_literal,[]}]},
{b_set,#{},{b_var,10},put_list,[{b_var,0},{b_var,9}]},
{b_set,
#{location => {"lib/crash_beam.ex",2}},
[Truncated]
[{file,"compile.erl"},{line,396}]},
{compile,fold_comp,4,[{file,"compile.erl"},{line,423}]},
{compile,internal_comp,5,[{file,"compile.erl"},{line,407}]}]}}}
in function compile:'-select_passes/2-anonymous-2-'/3 (compile.erl, line 672)
in call from compile:'-internal_comp/5-anonymous-1-'/3 (compile.erl, line 396)
in call from compile:fold_comp/4 (compile.erl, line 423)
in call from compile:internal_comp/5 (compile.erl, line 407)
in call from compile:'-do_compile/2-anonymous-0-'/2 (compile.erl, line 207)
in call from elixir_erl_compiler:compile/4 (src/elixir_erl_compiler.erl, line 52)
in call from elixir_erl:load_form/5 (src/elixir_erl.erl, line 439)
in call from elixir_erl_compiler:'-spawn/2-fun-0-'/3 (src/elixir_erl_compiler.erl, line 12)
(stdlib) lists.erl:1338: :lists.foreach/2
(elixir) src/elixir_erl_compiler.erl:12: anonymous fn/3 in :elixir_erl_compiler.spawn/2
(elixir) lib/kernel/parallel_compiler.ex:208: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/6
```
### Expected behavior
It should return something like `[a]`.
Answers:
username_1: Thanks for the report! Notice that this will never compile though: a macro is invoked at compile time, so we can't invoke ourselves at compile time while we are being defined. That said, the error message needs to be much better than it is today.
username_2: @username_1 this has bugged me for some time in Earmark and I was just about to report a complex case; great job @username_0 creating a minimal example :+1:. However, just in case it is useful: https://github.com/pragdave/earmark/tree/internal-beam-error
This was quite time consuming to debug, so yes, a nicer error message would be wonderful. <3 <3 <3 for fixing this.
Status: Issue closed
username_1: See #9111. :)
username_2: I am not sure about this; here is the delta between the failing compilation and the working one:
```
diff --git a/lib/earmark/helpers/lookahead_helpers.ex b/lib/earmark/helpers/lookahead_helpers.ex
index bb3efca..de344a3 100644
--- a/lib/earmark/helpers/lookahead_helpers.ex
+++ b/lib/earmark/helpers/lookahead_helpers.ex
@@ -104,16 +104,17 @@ defmodule Earmark.Helpers.LookaheadHelpers do
end
defp _read_list_lines(lines, result, params, indent)
- defp _read_list_lines([%Line.Blank{} | rest], result, params) do
+ defp _read_list_lines([%Line.Blank{} | rest], result, params, indent) do
# Behavior which lines are contained in the list changes dramatically after
# the first blank line.
- _read_spaced_list_lines(rest, [""|result], params)
+ _read_spaced_list_lines(rest, [""|result], params, indent)
end
# Same list type, continue slurping...
defp _read_list_lines(
[ %Line.ListItem{bullet: new_bullet, line: line} | rest],
result,
- params = %{bullet: old_bullet, pending: nil}
+ params = %{bullet: old_bullet, pending: nil},
+ indent
)
when new_bullet == old_bullet do
with {pending1, pending_lnb1} = opens_inline_code(line),
@@ -122,13 +123,15 @@ defmodule Earmark.Helpers.LookaheadHelpers do
params
| pending: pending1,
pending_lnb: pending_lnb1
- })
+ }, indent)
+
end
# Not the same list type, we are done
defp _read_list_lines(
[ %Line.ListItem{} | _] = rest,
result,
- _params
+ _params,
+ _indent
)
do
{false, Enum.reverse(result), rest}
@@ -137,26 +140,28 @@ defmodule Earmark.Helpers.LookaheadHelpers do
defp _read_list_lines(
[ %Line.Ruler{} | _] = rest,
result,
- _params
+ _params,
+ _indent
) do
{false, Enum.reverse(result), rest}
end
# Other text needs slurping...
- defp _read_list_lines([%{line: line} | rest], result, params = %{pending: nil}) do
+ defp _read_list_lines([%{line: line} | rest], result, params = %{pending: nil}, indent) do
with {pending1, pending_lnb1} = opens_inline_code(line),
[Truncated]
| pending: pending1,
pending_lnb: pending_lnb1
- })
+ }, indent)
end
# Running into EOI insise an open multiline inline code block
- defp _read_list_lines([], result, _params) do
+ defp _read_list_lines([], result, _params, _indent) do
{false, Enum.reverse(result), []}
end
@@ -204,7 +209,7 @@ defmodule Earmark.Helpers.LookaheadHelpers do
_read_spaced_list_lines(rest, [line|result], _opens_inline_code(line, params), indent)
end
# Got to the end
- defp _read_spaced_list_lines([], result, _params) do
+ defp _read_spaced_list_lines([], result, _params, _indent) do
{true, _remove_trailing_blank_lines(result, []), []}
end
```
username_2: So this is the delta between https://github.com/pragdave/earmark/tree/internal-beam-error &
https://github.com/pragdave/earmark/tree/tmp 75a04a85f01ff87fe8311c7db2b472bf6fe114ca |
sanity-io/sanity | 753359655 | Title: Studio doesn't load in browser if project path contains a '#'.
Question:
username_0: **Describe the bug**
If the path to the project contains a `#` character, e.g. `/Volumes/HDD/Users/ian/Projects/# My Project/Studio/`, opening the running studio in a browser shows the 'Connecting to Sanity.io' loading screen, but never progresses further.
Looking in the browser dev tools, the built-in server is returning 404s for `app.bundle.js` and `vendor.bundle.js`, even though they have been built and are present in the `static` directory.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a directory with a # somewhere in the name and move to it
2. `sanity init && sanity start`
3. Open http://localhost:3333 in a browser
**Expected behavior**
The Studio should connect to Sanity and load the UI.
**Which versions of Sanity are you using?**
@sanity/cli 2.0.9 (up to date)
@sanity/base 2.0.9 (up to date)
@sanity/components 2.0.9 (up to date)
@sanity/core 2.0.9 (up to date)
@sanity/default-layout 2.0.9 (up to date)
@sanity/default-login 2.0.9 (up to date)
@sanity/desk-tool 2.0.9 (up to date)
@sanity/vision 2.0.9 (up to date)
**What operating system are you using?**
macOS Big Sur 11.0.1
**Which versions of Node.js / npm are you running?**
6.14.9
v14.15.1
(Also tested with Node v15.3.0)
**Additional context**
Adding a # to the start of a directory name is a cheap 'n' dirty way to make a directory appear at the top of a column in the macOS Finder. |
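(One plausible mechanism, purely speculative on my part: if the path ever reaches URL handling unescaped, `#` starts the fragment and everything after it is dropped. A quick illustration:)
```python
# Speculative illustration only: '#' acts as a fragment separator in URLs,
# so an unescaped path segment containing it gets truncated.
from urllib.parse import urlsplit

parts = urlsplit("/static/# My Project/app.bundle.js")
print(parts.path)      # -> /static/
print(parts.fragment)  # -> " My Project/app.bundle.js" (note the leading space)
```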
jlippold/tweakCompatible | 541318656 | Title: `Tweak Count 2` working on iOS 12.4.4
Question:
username_0: ```
{
"packageId": "com.alex.tweakcount2",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.alex.tweakcount2",
"deviceId": "iPad4,1",
"url": "http://cydia.saurik.com/package/com.alex.tweakcount2/",
"iOSVersion": "12.4.4",
"packageVersionIndexed": true,
"packageName": "Tweak Count 2",
"category": "Tweaks",
"repository": "BigBoss",
"name": "Tweak Count 2",
"installed": "1.0.1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.alex.tweakcount2",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "count of installed tweaks in Cydia",
"latest": "1.0.1",
"author": "Alexandre",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
OpenNMT/CTranslate2 | 732230760 | Title: Docker and import Python
Question:
username_0: Hi,
If I want to use the OpenNMT server with CTranslate2, how should I install CTranslate2 to make it work? It should go in the virtual environment, shouldn't it? Is it possible to install it with the Docker image, or do I need to install it manually?
Thanks
Answers:
username_1: Can you be more specific about your issue?
You can install CTranslate2 with just `pip install ctranslate2`.
username_0: I want GPU support in CTranslate2, and to be able to use OpenNMT-py's `onmt_server` command.
username_0: The following Dockerfile should fit my needs. Thanks
```
FROM opennmt/ctranslate2:latest-ubuntu18-cuda10.2
RUN pip install OpenNMT-py
# Reset entrypoint
ENTRYPOINT []
```
username_1: Looks like the error is pretty clear. See here for the required driver version: https://docs.nvidia.com/deploy/cuda-compatibility/index.html (Table 1)
username_0: Aren't the drivers installed? I didn't find this explained in the documentation
username_1: There are many resources online explaining how to run Docker with GPU support.
Here's one from the official website: https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu
username_0: Ok, thanks for your time and information! I finally managed to install everything.
Status: Issue closed
username_0: It might not be related but, when I set the entrypoint in exec mode:
`ENTRYPOINT ["onmt_server"]`
I get an encoding issue:
`UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 396: ordinal not in range(128)`
However, when executing in shell form
`ENTRYPOINT onmt_server`
it works perfectly. Any guess?
username_2: @username_0 @username_1
Below is my Dockerfile:
```dockerfile
FROM opennmt/ctranslate2:latest-ubuntu18-cuda10.2
COPY / /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["python", "/app/app.py"]
```
When I try to run the built image, it gives the error "missing model".
However, when I run it like below, it works fine:
```dockerfile
FROM python:3.6.9
COPY / /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["python", "/app/app.py"]
```
In this case the ctranslate2 package is in my requirements.txt file.
What could be the issue?
username_1: Use `ENTRYPOINT` instead of `CMD` to override the image entrypoint. The default entrypoint of the image is defined here:
https://github.com/OpenNMT/CTranslate2/blob/master/docker/Dockerfile.ubuntu-gpu#L116
username_0: Hi
I have another issue. How should I configure Apache to serve the `onmt_server`? I guess I need Apache or similar to run it in production... Should I configure Apache inside the docker?
Thanks
username_1: This is unrelated to CTranslate2 so you should probably ask this question elsewhere. You can try asking on the [forum](https://forum.opennmt.net/)
username_2: When I use ENTRYPOINT instead of CMD in the Dockerfile:
```dockerfile
FROM opennmt/ctranslate2:latest-ubuntu18-cuda11.0
COPY / /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r src/requirements.txt
ENTRYPOINT ["python", "/app/src/app.py"]
```
I get the error below:
```
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
```
@username_1
username_2: @username_1
Getting this error:
```
RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version
```
although I have the latest driver (NVIDIA-SMI 455.32.00, Driver Version 455.32.00).
I am using the Dockerfile below:
```dockerfile
FROM opennmt/ctranslate2:latest-ubuntu18-cuda11.0
COPY / /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r src/requirements.txt
ENTRYPOINT ["python3", "/app/src/app.py"]
```
username_2: @username_1
Does the Python environment in the image opennmt/ctranslate2:1.16.2-ubuntu18-cuda11.0 not support Perl-based commands? Is it a slim version?
The base image below works fine for me:
```dockerfile
FROM python:3.6.9
```
username_1: Only the components required to run CTranslate2 are installed in the image.
You can install additional packages by extending the CTranslate2 image. First, create a new `Dockerfile` for example:
```text
FROM opennmt/ctranslate2:1.16.2-ubuntu18-cuda11.0
RUN apt-get update && apt-get install -y perl
```
Then build it. |
open-telemetry/opentelemetry-go | 759834087 | Title: Tracer configurations: how to set Export timeout()?
Question:
username_0: I haven't found any way to configure an export timeout for an OTLP ExportSpans() method. I believe this should be configurable for users.
#1378 refactors the way this setting works in the Metrics SDK. I believe such a configuration should be independent of the trace exporter--probably it belongs in the Span Batcher?
Answers:
username_1: We can update to support a context timeout from the first arg. We will just need to set the timeout.
This can be done post-GA.
username_1: Resolved in https://github.com/open-telemetry/opentelemetry-go/pull/1755
That adds an `ExportTimeout` option to the `BatchSpanProcessor`. Both the HTTP and gRPC protocol drivers for the OTLP exporter honor the timeout of the passed context: https://github.com/open-telemetry/opentelemetry-go/issues/1613#issuecomment-805297026
Status: Issue closed
username_0: 😀 Thank you! |
jbetancur/react-data-table-component | 609310562 | Title: Typing for IDataTableProps is Incorrect
Question:
username_0: ## Issue Check list
- [x] Agree to the [Code of Conduct](https://github.com/username_1/react-data-table-component/blob/master/CODE-OF-CONDUCT.md)
- [x] Read the README
- [x] You are using React 16.8.0+
- [ ] You installed `styled-components`
- [x] Include relevant code or preferably a [code sandbox](https://codesandbox.io/embed/react-data-table-sandbox-ccyuu
)
## Describe the bug
The typing for `IDataTableProps` is making `paginationServerOptions` required.
## To Reproduce
Install version 6.9.0 and use TypeScript.
## Expected behavior
I would expect that `paginationServerOptions` would be optional based upon the information provided at https://github.com/username_1/react-data-table-component#pagination.
## Code Sandbox, Screenshots, or Relevant Code

## Versions (please complete the following information)
- React: 16.13.1
- TypeScript: 3.8.3
- Styled Components: N/A
- OS: MacOS 10.15.4
- Browser: N/A
- VS Code: 1.44.2
## Additional context
N/A
Status: Issue closed |
MSO4SC/MSOPortal | 379057373 | Title: List of apps too long and sorted
Question:
username_0: The list of registered applications is too long (UX issue) and unsorted.
this should be fixed by at least sorting the entries and providing a better interface such as a reactjs component / dropdown to have automated completion
Answers:
username_1: @username_0, OK. If I understand the issue, the problem is not having a long list; the list is as long as the number of applications you (developers) want to provide, which I think is quite reasonable.
I think the issue is more related to the visual component that allows the user to select an application.
If I'm right, can you provide a better title for the issue? Something like (e.g.): improve the list-of-apps graphical component. Thanks!
ChrisGeorgakidis/DHT22---XBEE-Communication | 344368111 | Title: Program Auto Reset
Question:
username_0: The program resets automatically after a certain period of time.
Answers:
username_0: Microcontrollers have a watchdog. This is a counter that is initialized to a specific value and decreases by 1 with every instruction the microcontroller runs. When the watchdog reaches 0, the microcontroller assumes the program is taking too long to finish and that something has gone wrong. Rather than wait any longer, it resets its execution.
SOLUTION: To avoid this auto-reset, we must call the function sys_watchdog_reset() after any long set of computations; it resets the watchdog's value.
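A rough sketch of the resulting pattern (in Python purely for brevity; `sys_watchdog_reset()` is the platform call named above, stubbed out here so the snippet runs standalone):
```python
def sys_watchdog_reset():
    """Stub for the platform watchdog call described above."""
    print("watchdog counter reset")


def long_computation():
    total = 0
    for step in range(5):          # pretend each step is a heavy slice of work
        total += sum(range(10_000))
        sys_watchdog_reset()       # kick the watchdog between slices so its
                                   # counter never reaches 0 mid-computation
    return total


long_computation()
```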
Status: Issue closed
|
aryehof/dart-eventsubscriber | 625353640 | Title: Best practice question
Question:
username_0: Hey there,
Thanks for making these packages (Event and EventSubscriber). I just started using them yesterday and mixing them with the "[get_it](https://pub.dev/packages/get_it)" service locator package has solved several issues I was having when trying to use only the Provider package.
I had a question though about a statement from the documentation. You mentioned, "best practice is that generally on notification one queries the domain model directly, rather than having arguments deliver data to a consumer of the domain model". You then say "The supply of data shown here is by way of example only."
I am guessing that means the example is not actually the best way to go about using the package. Do you happen to have an example of what the actual best-practice way of using it might be compared to what was shown? I definitely would like to make sure I am learning and implementing things as properly as I can.
Thanks!
-MH
Answers:
username_1: Hi. It's more of a convention than a rule when dealing with a separate domain model. One can choose to pass information about the event with the event, but with complex systems over time, lots of events with information attached can be hard to manage. It will typically lead to decisions about the problem domain being placed in _one_ of the consumers of the model (like a UI), rather than all being in the problem domain model.
It's rare to actually have a separate object domain model these days, so my thought is not to worry too much and just pass data with the event, or instead just notify and then query the model... as you see fit.
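A minimal language-agnostic sketch of that notify-then-query convention (written here in Python purely for brevity; the names are mine, not from this package):
```python
class Counter:
    """Tiny stand-in domain model."""

    def __init__(self):
        self.value = 0
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def increment(self):
        self.value += 1
        for notify in self._subscribers:
            notify()  # the event carries no payload, it just says "changed"


model = Counter()
# On notification, the subscriber queries the model for what it needs.
model.subscribe(lambda: print("counter is now", model.value))
model.increment()  # prints: counter is now 1
```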
Hopefully this helps a bit?
Status: Issue closed
|
Azure/azure-cli | 823812814 | Title: UserData for VM and VM ScaleSets (Gartner - high priority)
Question:
username_0: **Resource Provider**
<!--- What is the Azure resource provider your feature is part of? --->
Compute Resource Provider
**Description of Feature or Work Requested**
<!--- Provide a brief description of the feature or work requested. A link to conceptual documentation may be helpful too. --->
https://microsoft.sharepoint.com/:w:/r/teams/ComputeVM/_layouts/15/Doc.aspx?sourcedoc=%7B13AE3881-1B63-4D2E-9334-71E687A4A61D%7D&file=UserDataRequirementsForCLI.docx&action=default&mobileredirect=true
This new UserData feature is essentially a new and improved version of the existing CustomData(https://docs.microsoft.com/en-us/azure/virtual-machines/custom-data). We are trying to catch up with AWS, which already has this feature.
**Minimum API Version Required**
<!--- What is the minimum API version of your service required to implement your feature? --->
2021-03-01
**Swagger Link**
<!--- Provide a link to the location of your feature(s) in the REST API specs repo. If your feature(s) has corresponding commit or pull request in the REST API specs repo, provide them. This should be on the master branch of the REST API specs repo. --->
https://github.com/Azure/azure-rest-api-specs/pull/13220
**Target Date**
<!--- If you have a target date for release of this feature/work, please provide it. While we can't guarantee these dates,
it will help us prioritize your request against other requests. --->
We would like to GA on April 30.
This new feature is for Gartner and we are on a tight deadline.
Answers:
username_1: hi @username_2, let's check the availability of 2021-03-01 API version and SDK release ETA, then communicate back on CLI plan.
username_2: @username_0 When will the 2021-03-01 API be merged into master branch?
username_0: Api version for 2021-03-01 is already available for CRP in all regions.
UserData for regular VMs is currently rolling out.
UserData for VMSS is not done yet and may not being rolling out until week of April 5.
Swagger PR for UserData (for both VM and VMSS) may not finish until week of April 12, unfortunately.
username_1: hi @username_0 , Swagger change https://github.com/Azure/azure-rest-api-specs/pull/13220#pullrequestreview-633218158 need to be checked in firstly as prerequisite for Azure CLI work. The workflow is: Swagger merged into master -> Python SDK released -> Azure CLI development and release(Azure CLI depends on Python SDK)
username_3: @username_0 Hi, may I ask what the expected help text is for the new `UserData` parameter?
Could I use the description from Swagger?
https://github.com/Azure/azure-rest-api-specs/blob/master/specification/compute/resource-manager/Microsoft.Compute/stable/2021-03-01/compute.json#L11047-L11050
username_4: @username_0 any feedback?
username_3: @username_0 Hi, may I confirm that validating the max size limit for `UserData` should be server-side logic? I didn't see any max limit checks for `customData` on the client side.
username_0: Yes, CRP does server-side validation.
If customData doesn't have client-side validation, we can follow the same pattern for userData.
username_3: @username_0 Hi, in this regard, I would like to confirm with you two questions:
1. Which query commands need to support querying `UserData` property?
From the perspective of REST API, only GET requests *([virtualmachines_get](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/get) and [virtualmachinescalesets_get](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets/get))* support parameter `expand`, while LIST requests *([virtualmachines_list](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/list) and [virtualmachinescalesets_list](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets/list))* do not. So we just need commands `az vm show` and `az vmss show` to support querying `UserData`, right?
2. Should querying `UserData` property be the default behavior, or should it be triggered by user input parameters? If it needs a parameter, what is the name of this parameter?
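(For reference, a rough sketch of what the raw REST call behind question 1 might look like; the `$expand=userData` usage is my reading of the Swagger and isn't verified against the service, and all identifiers are elided:)
```python
# Rough sketch (assumption from the Swagger, not verified): query a VM's
# userData via GET + $expand. Subscription/RG/VM names and token elided.
import requests

url = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.Compute/virtualMachines/{vm}"
).format(sub="...", rg="...", vm="...")

resp = requests.get(
    url,
    params={"api-version": "2021-03-01", "$expand": "userData"},
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json().get("properties", {}).get("userData"))
```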
username_0: Answers inline, thank you.
username_3: 3. What is the expected help text for the `--userdata` parameter? Could I use the description from Swagger: "UserData for the VM"?
username_0: Answers inline, thank you.
username_3: @username_0 Hi, I want to confirm that this requirement means `az vm update`/`az vmss update` also need to support updating the `--user-data` parameter, right? And when the `--user-data` value passed in is an empty string, it will clear the existing value, right?
username_0: Yes, that is exactly right.
Status: Issue closed
|
Vend-ng/Vend.Repo | 499193161 | Title: Database - Setup
Question:
username_0: We need to set up the database with Microsoft Azure
This will require us to create a group Azure account.
This setup is just creating the account and making sure the database is ready to be interfaced with (as in adding tables and writing queries) |
loopspace/Codea-Shapes | 52802695 | Title: Lighting shader efficiency
Question:
username_0: Given that (so far, at least) each mesh triangle has the same normal at each vertex, all of the lighting calculation can be done in the vertex shader, greatly reducing the overall amount of computation required.
I've uploaded a modified shader that does this.
Status: Issue closed |
Azure/azure-iot-cli-extension | 571638403 | Title: Dependency is not installed when extension is added
Question:
username_0: ## Describe the bug
**Command Name**
`az iot hub monitor-events
Extension Name: azure-cli-iot-ext. Version: 0.8.9.`
**Errors:**
`Dependency update (uamqp 1.2) required for IoT extension version: 0.8.9.`
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- `az iot hub monitor-events -n {} -t {}`
## Expected Behavior
## Environment Summary
```
Darwin-19.3.0-x86_64-i386-64bit
Python 3.7.4
Shell: bash
azure-cli 2.0.81 *
azure-cli-iot-ext 0.8.9
Extensions:
azure-cli-iot-ext 0.8.9
aks-preview 0.4.32
application-insights 0.1.3
```
## Additional Context
uamqp is installed with an out of band method rather than declared in setup.py like the other dependencies (paho-mqtt, jsonschema, setuptools). This is hindering automation where iot extension is required such that either uamqp has to be installed in the python environment manually or user intervention is required.
This out of band method seems to stem from a workaround that dates back to 2017 for homebrew provided python3/pip3 breaking the --target option.
This bug seems like it has since been addressed as per https://github.com/pypa/pip/pull/4557 and the current version of pip3 (19.2.3) can be confirmed as implementing this feature fine.
Please remove this workaround as it seems like it no longer has any relevance and is causing an unnecessary exception/workaround for automation use cases.
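For illustration, a sketch of what declaring the dependency inline could look like (the metadata and version pin here are my assumptions, not the extension's actual setup.py):
```python
# Hypothetical sketch only: what setup.py could declare if uamqp were a
# regular dependency instead of an out-of-band install.
from setuptools import setup

setup(
    name="azure-cli-iot-ext",
    version="0.8.9",
    install_requires=[
        "paho-mqtt",
        "jsonschema",
        "uamqp>=1.2",  # assumption: the version floor named in the error message
    ],
)
```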
Answers:
username_1: Thanks for the issue @username_0 . There is another reason for the out of band install of uamqp. That's due to uamqp being a C extension which has been built for a common set of environments. While uamqp has good compatibility it does not match pure python. Instead of reducing the compatibility of the IoT extension to uamqp its installed out of band.
Your scenario can be resolved by appending "-y" or "--yes" for any amqp command. For example:
`az iot device c2d-message send -d MyDevice -n MyHub --yes`
username_1: Closing issue since the immediate scenario is resolved. We can revisit uamqp dependency management later.
Status: Issue closed
|
flightlessmango/MangoHud | 957104030 | Title: Memory Leak
Question:
username_0: Hello,
Thank you for creating this tool for Linux Gamers.
I couldn't decide whether to submit this issue here or to GOverlay, but it has something to do with the distro info config. If I enable the distro version in the config, it causes a memory leak: MangoHud tries to get the distro info with the `lsb_release -a` command, the task manager gets flooded with instances of this command, and they fill all the RAM until there is none left and the game crashes. I couldn't take a screenshot of it because it freezes my PC, but after disabling the distro version config there is no issue. Should I submit it to GOverlay as well?
Thanks
Answers:
username_0: 

I was able to take a screenshot
username_0: And this is without distro info


username_1: I do not use Goverlay, but can you disable those options and just add this to your MangoHud.conf file and see if you experience the same issue?
```
custom_text=Distro
exec=lsb_release -a | grep Description | cut -c 14-26
```
I just tested and had no issues
username_0: Thank you for your reply but still same. I'll disable it. I was just using it for the screenshots.

username_2: If you run that `lsb_release | grep` etc command manually, does it exit?
username_0: 
Yeah it exits.
username_3: This should be fixed in master
Status: Issue closed
|
gilhardl/ng-strapi-auth | 433134033 | Title: Make saveCredentials() and unsaveCredentials() public
Question:
username_0: This can be useful if you want to modify a user right after they sign up. Otherwise, the module keeps in memory and in localStorage the user as they were at sign-up time, unmodified.
Status: Issue closed |
stryker-mutator/stryker-net | 976221384 | Title: Update all assembly versions when releasing new Stryker package
Question:
username_0: We should probably add the updating of this version number to the prepare-release.js script we use to bump the version number in the other assemblies.
_Originally posted by @username_0 in https://github.com/stryker-mutator/stryker-net/pull/1664#discussion_r693360626_
Status: Issue closed |
sidorares/node-mysql2 | 340730306 | Title: Config property is undefined when using promise wrapper.
Question:
username_0: The config property is undefined when using the promise wrapper.
I see there is an underlying connection attached at the "connection" property, but this breaks compatibility with the typings and with the original mysql package.
Answers:
username_1: can you show some code to explain better problem?
Promise wrapper is not always 1:1 compatible, but when make sense it should be
username_0: ```typescript
promiseConnection.connection.config // { host, user, database... }
promiseConnection.config // undefined, even though it is defined in the interface
// so in TypeScript, the first access has to be written like this:
(promiseConnection as any).connection.config // discouraged and unsafe
```
Of course you might think of it as a typing issue
username_1: where do you get typings from? I don't mind adding `.config` to promise wrapper as well but interesting to understand why your setup expected to see this property |
sofastack/sofa-boot | 508320555 | Title: Migrate samples to sofastack-guides
Question:
username_0: At present, the sample projects for related components and framework products are all kept in their corresponding repositories. To facilitate unified management and simplify the codebase, we are moving the samples for the SOFAStack products to the [sofastack-guides](https://github.com/sofastack-guides) space.
* [SOFABoot Samples](https://github.com/sofastack-guides/sofa-boot-guides)
* [SOFARPC Samples](https://github.com/sofastack-guides/sofa-rpc-guides)
* [SOFATracer Samples](https://github.com/sofastack-guides/sofa-tracer-guides)
Status: Issue closed |
dojo/cli-create-theme | 285703144 | Title: Handle the case when a theme file already exists
Question:
username_0: When a theme file already exists, you see this message:
`A theme file already exists in ....src/themes/theme.ts. Will not overwrite.`
But we _do_ still generate the CSS modules. We should address this somehow, for example, print out the code in the terminal, and instruct the user to merge it with their existing theme file.
Answers:
username_1: I think we should just exit if the theme already exists.... |
mamiline6/sass_practice | 91582794 | Title: Practice: build a tab-like UI
Question:
username_0: @username_1
Please code up a design like the one below.

Note that this is just a wireframe, so it doesn't have to match the design exactly, and it doesn't have to function as real tabs (no JS needed).
The required elements are:
- Tabs and a content area corresponding to each tab
- When a tab is selected, a class (e.g. `is-selected`) is applied to the tab and its corresponding content element
Use the following directory layout:
```
gulpfile.js
practice_tab/
src/
scss/
style.scss
app/
index.html
css/
style.css
```
- Write `app/index.html` as a plain HTML file.
- Make sure `src/scss/style.scss` compiles to `app/css/style.css`.
If anything is unclear, ask here.
Answers:
username_0: - Since the source and output paths change, please add a task to the gulpfile.
- Feel free to add browserSync or similar if needed.
username_1: Thank you very much!
Status: Issue closed
|
cipchk/ngx-countdown | 806208005 | Title: "restart" works from click event but not from handleEvent function
Question:
username_0: I can call restart() from a button click event manually and it works, but if do the same from the handleEvent it doesn't work:
@ViewChild('cd', { static: false }) private countdown: CountdownComponent;
<countdown #cd [config]="config" (event)="handleEvent($event)"></countdown>
<button (click)="onRestart()">restart</button>
handleEvent(event){
if(event.action === 'done'){
this.countdown.restart(); // this doesn't work
}
}
onRestart(){
this.countdown.restart(); // this works!
}
Answers:
username_1: Deferring the restart with `setTimeout` works:
```ts
import { Component, ViewChild } from "@angular/core";
import { CountdownComponent, CountdownEvent } from "ngx-countdown";

@Component({
  // component metadata elided in the original snippet
})
export class AppComponent {
  @ViewChild("cd", { static: false }) private countdown: CountdownComponent;

  handleEvent(event: CountdownEvent) {
    if (event.action === "done") {
      setTimeout(() => {
        this.countdown.restart(); // this works
      });
    }
  }
}
```
Status: Issue closed
username_1: fixed by `11.0.2` |
kubernetes/kubernetes | 170291197 | Title: Detachment of volumes takes longer than expected
Question:
username_0: I'm running into a variety of issues here and there, but this one is consistent enough to note. I'm deploying an RC with a single Pod (single container). It's `nginx` with a single `azureDisk` volume/mount.
Using a preformatted, VHD, the attach goes okay. Not so much for the detach.
from `kube-apiserver`, I delete the rc/pod at 5:12:
```
I0809 05:12:21.303910 1 handlers.go:164] DELETE /api/v1/namespaces/default/pods/azure-volume-example-5utsg: (12.436659ms) 200 [[hyperkube/v1.4.0 (linux/amd64) kubernetes/186a81c] 10.240.0.4:49904]
I0809 05:12:21.861025 1 handlers.go:164] DELETE /api/v1/namespaces/default/replicationcontrollers/azure-volume-example: (2.881437ms) 200 [[kubectl/v1.4.0 (linux/amd64) kubernetes/186a81c] 172.16.58.3:53750]
```
from `kube-controller-manager`, the detach didn't happen until six minutes later at 5:18:
```
I0809 05:18:21.429653 1 reconciler.go:135] Started DetachVolume for volume "kubernetes.io/azure-disk/kube-registry-disk" from node "colemick-vhdtest5-node-0" due to maxWaitForUnmountDuration expiry.
I0809 05:20:21.717023 1 operation_executor.go:591] DetachVolume.Detach succeeded for volume "kubernetes.io/azure-disk/kube-registry-disk" (spec.Name: "disk-azuredisk") from node "colemick-vhdtest5-node-0".
```
Looking at `kubelet` on the node now... we can see the teardown succeed very quickly:
```
Aug 09 05:12:21 colemick-vhdtest5-node-0 docker[3592]: I0809 05:12:21.382500 3761 operation_executor.go:818] UnmountVolume.TearDown succeeded for volume "kubernetes.io/azure-disk/kube-registry-disk" (volume.spec.Name: "disk-azuredisk") pod "dfe2e9f4-5dee-11e6-b667-000d3a918549" (UID: "dfe2e9f4-5dee-11e6-b667-000d3a918549").
```
but the Unmount fails over and over (it's actually **still** failing)
```
Aug 09 05:12:21 colemick-vhdtest5-node-0 docker[3592]: I0809 05:12:21.460317 3761 reconciler.go:289] UnmountDevice operation started for volume "kubernetes.io/azure-disk/kube-registry-disk" (spec.Name: "disk-azuredisk")
Aug 09 05:12:21 colemick-vhdtest5-node-0 docker[3592]: E0809 05:12:21.488038 3761 nestedpendingoperations.go:232] Operation for "\"kubernetes.io/azure-disk/kube-registry-disk\"" failed. No retries permitted until 2016-08-09 05:12:21.988016098 +0000 UTC (durationBeforeRetry 500ms). Error: UnmountDevice.DeviceOpened failed for volume "kubernetes.io/azure-disk/kube-registry-disk" (spec.Name: "disk-azuredisk") with: PathIsDevice failed for path "0": stat 0: no such file or directory
[...]
Aug 09 05:26:29 colemick-vhdtest5-node-0 docker[3592]: I0809 05:26:29.621218 3761 reconciler.go:289] UnmountDevice operation started for volume "kubernetes.io/azure-disk/kube-registry-disk" (spec.Name: "disk-azuredisk")
Aug 09 05:26:29 colemick-vhdtest5-node-0 docker[3592]: E0809 05:26:29.621455 3761 nestedpendingoperations.go:232] Operation for "\"kubernetes.io/azure-disk/kube-registry-disk\"" failed. No retries permitted until 2016-08-09 05:28:29.621409049 +0000 UTC (durationBeforeRetry 2m0s). Error: UnmountDevice.DeviceOpened failed for volume "kubernetes.io/azure-disk/kube-registry-disk" (spec.Name: "disk-azuredisk") with: PathIsDevice failed for path "0": stat 0: no such file or directory
```
So, not sure why it took so long to detach the disk, why the unmounts are failing, or why it's still trying to unmount despite the fact that the volume was already detached.
Found from investigating https://github.com/kubernetes/kubernetes/pull/29836#issuecomment-238717361
CC: @username_1
Status: Issue closed
Answers:
username_1: This is [a problem](https://github.com/kubernetes/kubernetes/pull/29836#issuecomment-238723737) with PR #29836 not a general problem. Sorry for making you open this. Closing. |
swoole/swoole-src | 511382549 | Title: when open_http2_protocol=TRUE enable_static_handler seems stops working
Question:
username_0: Please answer these questions before submitting your issue. Thanks!
1. What did you do? If possible, provide a simple script for reproducing the error.
We enabled both open_http2_protocol = TRUE (along with SSL) and enable_static_handler = TRUE and we set document_root.
```php
<?php
$http = new Swoole\Http\Server('0.0.0.0', 8081, SWOOLE_PROCESS, SWOOLE_SOCK_TCP | SWOOLE_SSL);
$options = [
'enable_static_handler' => TRUE,
'document_root' => dirname(__FILE__),
'open_http2_protocol' => TRUE,
'ssl_cert_file' => './localhost.crt',
'ssl_key_file' => './localhost.key',
];
$http->set($options);
$http->on('request', function (Swoole\Http\Request $request, Swoole\Http\Response $response) {
$response->end('this is swoole worker');
});
$http->start();
```
Then load "https://localhost:8081/file.txt" (the file exists)
2. What did you expect to see?
output: "static content" (this is the content of "file.txt" - to be served by the static handler)
3. What did you see instead?
output: "this is swoole worker"
The request gets served by Swoole Worker instead of the static handler.
If just open_http2_protocol is set to FALSE it works as expected.
4. What version of Swoole are you using (show your `php --ri swoole`)?
swoole
Swoole => enabled
Author => <NAME> <<EMAIL>>
Version => 4.4.9-alpha
Built => Oct 22 2019 09:51:51
coroutine => enabled
epoll => enabled
eventfd => enabled
signalfd => enabled
cpu_affinity => enabled
spinlock => enabled
rwlock => enabled
openssl => OpenSSL 1.1.0l 10 Sep 2019
http2 => enabled
zlib => enabled
mutex_timedlock => enabled
pthread_barrier => enabled
futex => enabled
async_redis => enabled
Directive => Local Value => Master Value
[Truncated]
swoole.enable_library => On => On
swoole.enable_preemptive_scheduler => Off => Off
swoole.display_errors => On => On
swoole.use_shortname => On => On
swoole.unixsock_buffer_size => 8388608 => 8388608
(it was built from Master)
PHP 7.4.0RC4
5. What is your machine environment used (including version of kernel & php & gcc) ?
Linux LOCAL_DEV_SWOOLE 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 GNU/Linux
PHP 7.4.0RC4 (cli) (built: Oct 18 2019 11:40:42) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0-dev, Copyright (c) Zend Technologies
with Zend OPcache v7.4.0RC4, Copyright (c), by Zend Technologies
gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Answers:
username_1: static_handler only supports HTTP/1.1 for now
username_2: fixed
Status: Issue closed
username_0: Thank you for fixing this!
Now the server can start with both HTTP/2 and the static handler, but in this mode it doesn't return the correct Content-Type header. Accessing /someimage.png returns content-type: "text/html" instead of the correct "image/png".
username_2: Missing content type, resolved |
cul-it/signage | 409943641 | Title: LibCal hours - check if still open from yesterday
Question:
username_0: This covers early-morning closing times (past `11:59:59 pm`), which actually fall on the next day. Once 12am hits, the LibCal API call for hours requests the following day, which would result in an inaccurate status of closed.
##### Background/History
This check was [dropped from the legacy app](https://github.com/cul-it/circ-display/pull/39) after switching to LibCal for hours since the LibCal API `api_hours_date` endpoint initially only supported future dates (or today).
When revisiting this today I noticed it now accepts dates in the past, although it's not clear how far back. Initial testing points to maybe only the current week (as of today, Wednesday `2/13/19`, requests for dates prior to Sunday `2/10/19` return null), but if that's the case, how do we handle the Saturday-to-Sunday transition? Stay tuned...
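To pin down the intended logic, here is a hypothetical sketch (the names and the 6am cutoff are mine, not LibCal's):
```python
# Hypothetical sketch: decide which date's hours to request. The 6am
# cutoff is an assumption for illustration, not a LibCal value.
from datetime import date, datetime, timedelta

def effective_hours_date(now: datetime) -> date:
    if now.hour < 6:
        # Early morning: we may still be "open from yesterday", so check
        # yesterday's schedule for a closing time past midnight first.
        return (now - timedelta(days=1)).date()
    return now.date()

print(effective_hours_date(datetime(2019, 2, 14, 0, 30)))  # 2019-02-13
print(effective_hours_date(datetime(2019, 2, 14, 9, 0)))   # 2019-02-14
```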
Answers:
username_0: Morning @username_1. Thanks again for adding those late night (early morning) bookings to the CoLab. That really helped me iron out some of the final details and I think we're in pretty good shape.
I would like to test one more edge case. Will you please add several more bookings for the next couple days? Here's what I'm hoping to cover:
* bookings that stop or start at midnight
* bookings that cross over midnight
username_1: Here are the bookings I made, please let me know if additional bookings would help, they only take a minute to create.
12:00am Wednesday, March 20, 2019 - 12:45am Wednesday, March 20, 2019
11:00pm Wednesday, March 20, 2019 - 12:00am Thursday, March 21, 2019
11:30pm Thursday, March 21, 2019 - 12:30am Friday, March 22, 2019
username_0: Thanks Devin.
If it's not too much trouble, do you mind moving the midnight overlapping booking to tonight since this is the one I expect will be the most challenging? Sorry I wasn't clear about that.
Also we could probably adjust the other two bookings (start/end at midnight) to be consecutive on Wed night into Thurs morning so that we can have each of our conditions/cases covered by Thursday morning 😄.
username_1: Got it, it should be better now!
username_0: Perfect....thank you! And of course the overlapping booking already shows it will be a challenge 😉
username_0: One more thing before I forget...
I was able to confirm [my original suspicion from above](https://github.com/cul-it/signage/issues/89#issue-409943641) about requesting hours for dates in the past via the LibCal API.
I've created #115 to provide more details and a proposed solution. Please review and comment there if applicable.
username_0: Morning @username_1.
I've refined the logic around early morning closings to address these edge cases and I feel like we've covered all known conditions.
Feel free to take the latest and greatest for a spin on the [staging instance](https://signage-stg-pr-111.herokuapp.com/olin/spaces/colab) and please let me know when you'd like to go live with the Olin CoLab iPad so I can deploy to production.
username_1: I tested a few more things and noticed some possible bugs:
The page doesn't seem to refresh automatically. I'm not sure if this is an artifact of the staging environment.
I added several appointments with gaps and it seems to only account for one combination of starting time and ending time (e.g. the room shows unavailable at 6:15pm and available again at 8:15pm for the schedule below)
6:15pm Thursday, March 21, 2019 - 7:00pm Thursday, March 21, 2019
7:45pm Thursday, March 21, 2019 - 8:15pm Thursday, March 21, 2019
9:15pm Thursday, March 21, 2019 - 9:30pm Thursday, March 21, 2019
11:45pm Thursday, March 21, 2019 - 12:30am Friday, March 22, 2019
username_0: Hi Devin.
Thanks for the feedback. Quick response to your two potential bugs:
1. I'm not able to reproduce; the staging instance is refreshing data on the expected intervals. Can you provide additional details on what makes you think this is not happening on your end?
1. Definitely a bug, introduced in an enhancement implemented after my comment from this morning (#116). Thanks for catching this 👍 I see what's up and will submit a patch shortly.
username_0: Okay Devin...[patch has been applied for your second issue](https://github.com/cul-it/signage/pull/117) and [review app has been spun up](https://signage-stg-pr-117.herokuapp.com/olin/spaces/colab).
Thanks again. Looking forward to hearing from you.
username_1: 1. The clock in the upper right doesn't update, it still says 4:32 and it's now 4:50.
2. That fix looks good to me, thank you!
username_0: Thanks for clarifying on number 1. Is this on the iPad? Using the Kiosk app or just in Safari? Any details you can provide will help. I'm still unable to reproduce locally on my laptop 😄
username_0: Quick update...just ran to get my iPad and **I am able to reproduce** your observed behavior (time is not refreshing).
It's not immediately obvious to me why the underlying code behind the refresh isn't triggering in the older mobile Safari browser, but I will take a closer look tomorrow and let you know what I find.
username_1: Sorry, yes, this is the built in version of Safari on the iPad (iOS 9.3.5, the latest available for this iPad 2) and the Kiosk app appears to match (in this case I tested from 5:08 until 5:16)
username_1: I appreciate it, thank you!
username_0: Morning @username_1. We should be back in business with the older iPads refreshing content (#118). Please let me know if you find otherwise.
username_1: Looks good, the page refreshes, the clock updates, and new appointments "appear" on their own. I think we're ready to go!
username_0: Great...I'll get everything merged and let you know once I've deployed to production.
Status: Issue closed
username_0: Deployed to production. Thanks for the huge assist, @username_1 🤗
https://signage.library.cornell.edu/olin/spaces/colab |
borgbackup/borg | 308297259 | Title: do back/forward ports to other branches (2)
Question:
username_0: https://github.com/borgbackup/borg/issues?utf8=%E2%9C%93&q=label%3Abackport%2F1.0-maint
https://github.com/borgbackup/borg/issues?utf8=%E2%9C%93&q=label%3Abackport%2F1.1-maint
do 3 simple/small or 1 bigger/complex backport from there.
---
:moneybag: [there is a bounty for this]()
Answers:
username_0: Working on:
#3733
#3739 / #3742
username_0: Fixed by PR #3756 .
Status: Issue closed
|
djtu-zcx/algrothm | 557941122 | Title: [2020-1-31]Generate Parentheses
Question:
username_0: ### Generate Parentheses Description
给出 n 代表生成括号的对数,请你写出一个函数,使其能够生成所有可能的并且有效的括号组合。
例如,给出 n = 3,生成结果为:
```
[
"((()))",
"(()())",
"(())()",
"()(())",
"()()()"
]
```
### solution
Only add '(' or ')' when we know the sequence can still remain valid, instead of blindly adding one at every step like the brute-force approach. We can do this by tracking the number of opening and closing brackets placed so far:
if we have fewer opening brackets than the maximum, we can place an opening bracket. If the number of closing brackets does not exceed the number of opening brackets, we can place a closing bracket.
```golang
// strRet collects every valid combination found by the recursion.
var strRet = []string{}

func generateParenthesis(n int) []string {
	strRet = []string{} // reset between calls
	generate("", 0, 0, n)
	return strRet
}

// cur is the sequence built so far; start counts '(' placed,
// end counts ')' placed, and max is the number of pairs n.
func generate(cur string, start, end, max int) {
	if len(cur) == 2*max {
		strRet = append(strRet, cur)
		return
	}
	if start < max { // we may still place an opening bracket
		generate(cur+"(", start+1, end, max)
	}
	if end < start { // closing here keeps the sequence valid
		generate(cur+")", start, end+1, max)
	}
}
```
### Aside
I originally thought a binary/bitmask enumeration should work, but after 3 hours on it I gave up; let's go with this recursive approach instead.
google/ExoPlayer | 533776140 | Title: Play Offline
Question:
username_0: Hey guys. I ran into a problem when playing a downloaded video in offline mode:
- I downloaded the video to the downloads directory
- I played the video in offline mode (no WiFi, no 3G/4G...)
The LogCat output is below:
```
D/OpenGLRenderer: eglCreateWindowSurface = 0x71f76e6080, 0x71f1bbf010
D/SurfaceView: show() Surface(name=SurfaceView - <myPackageName>/<myPackageName>.activities.PlayerActivity@f690f8b@0[19081])/@0x2fa2868 android.view.SurfaceView{f690f8b V.E...... ......ID 0,0-1080,1836}
D/SurfaceView: surfaceCreated 1 #8 android.view.SurfaceView{f690f8b V.E...... ......ID 0,0-1080,1836}
D/SurfaceView: surfaceChanged (1080,1836) 1 #8 android.view.SurfaceView{f690f8b V.E...... ......ID 0,0-1080,1836}
D/ViewRootImpl@4e50549[PlayerActivity]: Relayout returned: old=[0,0][1080,2220] new=[0,0][1080,2220] result=0x3 surface={valid=true 489386930176} changed=false
D/ViewRootImpl@4e50549[PlayerActivity]: MSG_RESIZED: frame=Rect(0, 0 - 1080, 2220) ci=Rect(0, 72 - 0, 144) vi=Rect(0, 72 - 0, 144) or=1
D/ViewRootImpl@4e50549[PlayerActivity]: MSG_WINDOW_FOCUS_CHANGED 1 1
D/InputMethodManager: prepareNavigationBarInfo() DecorView@c95ae4e[PlayerActivity]
D/InputMethodManager: getNavigationBarColor() -855310
D/InputMethodManager: prepareNavigationBarInfo() DecorView@c95ae4e[PlayerActivity]
getNavigationBarColor() -855310
V/InputMethodManager: Starting input: tba=<myPackageName> ic=null mNaviBarColor -855310 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager: startInputInner - Id : 0
I/InputMethodManager: startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport: Input channel constructed: fd=108
Input channel destroyed: fd=92
I/System.out: (HTTPLog)-Static: isSBSettingEnabled false
(HTTPLog)-Static: isSBSettingEnabled false
I/System.out: (HTTPLog)-Static: isSBSettingEnabled false
(HTTPLog)-Static: isSBSettingEnabled false
I/ACodec: [] Now uninitialized
I/ACodec: [] onAllocateComponent
I/OMXClient: IOmx service obtained
I/ACodec: [OMX.Exynos.avc.dec.secure] Now Loaded
D/SurfaceUtils: connecting to surface 0x71e52d5010, reason connectToSurface
I/MediaCodec: [OMX.Exynos.avc.dec.secure] setting surface generation to 19538949
D/SurfaceUtils: disconnecting from surface 0x71e52d5010, reason connectToSurface(reconnect)
D/SurfaceUtils: connecting to surface 0x71e52d5010, reason connectToSurface(reconnect)
I/ACodec: [HW_HDR] app-pid : 19081
W/DirectStreamingProxy: app-pid not found. use getpid(). pid = 19081
D/DirectStreamingProxy: pid = 19081
I/ACodec: can't find wfdsink-exynos-enable
I/SmartFittingClass: Create SmartFitting Version 2.0
I/SmartFittingClass: Init, [State:UNINITIALIZED] pid: 19081
I/ACodec: codec does not support config priority (err -1010)
I/ACodec: [OMX.Exynos.avc.dec.secure] Now Loaded->Idle
I/SmartFittingClass: InitialCheck()
Create SmartFittingManagerServiceProxy!!
SmartFittingManagerServiceProxy::init
Create SmartFittingManagerServiceProxy::EventHandler
SmartFittingManagerServiceProxy::onAddSmartFittingListener pid: 19081
Create SmartFittingListener
I/SmartFittingClass: InitialCheck, WhiteListStatus returned from CodecSolution : 0
InitialCheck, [State:FINISHED] SmartFitting has not been activated by App. Shut Down SmartFitting.
I/SurfaceUtils: setNativeWindowSizeFormatAndUsage isNormalDrm 1 value 0
D/SurfaceUtils: set up nativeWindow 0x71e52d5010 for 1920x1080, color 0x123, rotation 0, usage 0x10606900
D/ACodec: [OMX.Exynos.avc.dec.secure] setting nBufferCountActual to 9 failed: -22
D/ACodec: [OMX.Exynos.avc.dec.secure] setting nBufferCountActual to 8 failed: -22
D/ACodec: [OMX.Exynos.avc.dec.secure] setting nBufferCountActual to 7 failed: -22
I/ACodec: [OMX.Exynos.avc.dec.secure] configureOutputBuffersFromNativeWindow setBufferCount : 6, minUndequeuedBuffers : 2
I/MediaCodec: setCodecState state : 0
I/ACodec: [OMX.Exynos.avc.dec.secure] Now Idle->Executing
I/ACodec: [OMX.Exynos.avc.dec.secure] Now Executing
I/ACodec: [OMX.Exynos.avc.dec.secure] calling emptyBuffer 1 w/ codec specific data, size : 34
W/MapperHal: buffer descriptor with invalid usage bits 0x202000
[Truncated]
I/SmartFittingClass: ShutDownSmartFitting!!
SmartFittingManagerServiceProxy::Deinit
I/SmartFittingClass: Destroy SmartFitting!!
I/SmartFittingClass: Destroy ~SmartFittingManagerServiceProxy::EventHandler
Destroy SmartFittingManagerServiceProxy!!
I/SmartFittingClass: Destroy SmartFittingListener
I/ACodec: [OMX.Exynos.avc.dec.secure] Now uninitialized
I/ACodec: [] Now kWhatShutdownCompleted event : 8531
I/MediaCodec: Codec shutdown complete
I/ACodec: [OMX.google.aac.decoder] Now Executing->Idle
I/ACodec: [OMX.google.aac.decoder] Now Idle->Loaded
I/ACodec: [OMX.google.aac.decoder] Now Loaded
[OMX.google.aac.decoder] Now kWhatShutdownCompleted event : 8531
I/ACodec: [OMX.google.aac.decoder] Now uninitialized
I/ACodec: [] Now kWhatShutdownCompleted event : 8531
I/MediaCodec: Codec shutdown complete
```
There is one error printed: `E/ExoPlayerImplInternal: Playback error.`
I don't know why. In the ExoPlayer project's demo, the downloaded video still plays fine.
Can somebody help me? Thanks all <3
Answers:
username_1: Please provide complete information as requested in the issue template. The issue template can be found [here](https://github.com/google/ExoPlayer/blob/release-v2/.github/ISSUE_TEMPLATE/bug.md). If you're unable to share bug reports or test content publicly, please send them to <EMAIL> using a subject in the format "Issue #1234", where "#1234" should be replaced with your issue number. Please also update this issue to indicate you’ve done this.
Especially important:
- Sample media.
- Bugreport.
Status: Issue closed
|
pypa/pip | 758941605 | Title: [PyPy] "pip install --help" fails with TypeError: 'list' objects are unhashable
Question:
username_0: **Environment**
* pip version: 20.3.1
* Python version: PyPy 7.3.2 (Python 2.7.13)
* OS: Ubuntu 16.04
**Description**
On PyPy with pip 20.3.1, `pip install --help` fails with `TypeError: 'list' objects are unhashable`.
**Expected behavior**
"pip install --help" should print the help message.
**How to Reproduce**
1. Create a virtualenv with PyPy 2
2. Upgrade setuptools and wheel: `python -m pip install --upgrade setuptools wheel`
3. Run `python -m pip install --help`
4. A `TypeError: 'list' objects are unhashable` error occurs (see below).
**Output**
```
ubuntu@ip-10-122-25-114:~$ createvirtualenv /opt/python/pypy/bin/pypy venvpypy
+ createvirtualenv /opt/python/pypy/bin/pypy venvpypy
+ PYTHON=/opt/python/pypy/bin/pypy
+ VENVPATH=venvpypy
+ /opt/python/pypy/bin/pypy -m virtualenv --version
virtualenv 20.0.33 from /opt/python/pypy/site-packages/virtualenv/__init__.pyc
+ VIRTUALENV='/opt/python/pypy/bin/pypy -m virtualenv --never-download'
+ /opt/python/pypy/bin/pypy -m virtualenv --never-download venvpypy
created virtual environment PyPy2.7.13.final.42-64 in 220ms
creator PyPy2Posix(dest=/home/ubuntu/venvpypy, clear=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/ubuntu/.local/share/virtualenv)
added seed packages: pip==20.2.3, setuptools==44.1.1, wheel==0.35.1
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator
+ '[' Windows_NT = '' ']'
+ . venvpypy/bin/activate
++ '[' venvpypy/bin/activate = -bash ']'
++ deactivate nondestructive
++ unset -f pydoc
++ '[' -z '' ']'
++ '[' -z '' ']'
++ '[' -n /bin/bash ']'
++ hash -r
++ '[' -z '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/home/ubuntu/venvpypy
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/opt/go/bin:/opt/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/node/bin:/opt/node/bin:/home/ubuntu/cli_bin
++ PATH=/home/ubuntu/venvpypy/bin:/opt/go/bin:/opt/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/node/bin:/opt/node/bin:/home/ubuntu/cli_bin
++ export PATH
++ '[' -z '' ']'
[Truncated]
File "/opt/python/pypy/lib-python/2.7/optparse.py", line 1085, in format_help
result.append(self.format_option_help(formatter))
File "/opt/python/pypy/lib-python/2.7/optparse.py", line 1074, in format_option_help
result.append(formatter.format_option(option))
File "/opt/python/pypy/lib-python/2.7/optparse.py", line 316, in format_option
help_text = self.expand_default(option)
File "/home/ubuntu/venvpypy/site-packages/pip/_internal/cli/parser.py", line 123, in expand_default
default_value, redact_auth_from_url(default_value))
File "/home/ubuntu/venvpypy/site-packages/pip/_internal/utils/misc.py", line 826, in redact_auth_from_url
return _transform_url(url, _redact_netloc)[0]
File "/home/ubuntu/venvpypy/site-packages/pip/_internal/utils/misc.py", line 786, in _transform_url
purl = urllib_parse.urlsplit(url)
File "/opt/python/pypy/lib-python/2.7/urlparse.py", line 176, in urlsplit
cached = _parse_cache.get(key, None)
TypeError: 'list' objects are unhashable
(venvpypy) ubuntu@ip-10-122-25-114:~$ python --version
+ python --version
Python 2.7.13 (6abe2e00c51d, Sep 23 2020, 05:06:33)
[PyPy 7.3.2 with GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
```
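For context: the traceback shows `expand_default` passing an option's default value through `redact_auth_from_url` into `urlsplit`. When that default is a list (presumably from a multi-use option), Python 2's `urlparse` result cache fails because the cache key must be hashable. A minimal sketch of the mechanism, using only the Python 2 standard library:

```python
# urlparse.urlsplit() memoizes results in a dict keyed on its arguments,
# so any unhashable argument raises TypeError before parsing even starts.
from urlparse import urlsplit  # 'urllib.parse' on Python 3

print(urlsplit("https://user:secret@pypi.example/simple"))  # str key: works

try:
    urlsplit(["https://pypi.example/simple"])  # a list, as in the traceback above
except TypeError as exc:
    print(exc)  # CPython 2: "unhashable type: 'list'"; PyPy: "'list' objects are unhashable"
```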
Answers:
username_1: Hi, can you try if #9207 fixes the issue?
username_0: Yes this seems to be fixed in master:
```
(venvpypy) ubuntu@ip-10-122-25-114:~$ pip install --upgrade https://github.com/pypa/pip/archive/master.tar.gz
DEPRECATION: pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
Looking in indexes: https://artifactory.corp.mongodb.com/artifactory/api/pypi/pypi/simple, https://pypi.org/simple
Collecting https://github.com/pypa/pip/archive/master.tar.gz
Downloading https://github.com/pypa/pip/archive/master.tar.gz
/ 8.8 MB 10.2 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Building wheels for collected packages: pip
Building wheel for pip (PEP 517) ... done
Created wheel for pip: filename=pip-21.0.dev0-py2.py3-none-any.whl size=1519799 sha256=c546a19befe9b9ecca89fe72e55478349a139beb9c8f046d780655c6f890d9a4
Stored in directory: /data/tmp/pip-ephem-wheel-cache-1usjXk/wheels/fd/ba/0a/a283cf6fd417b712631848e460eb636624b69f04c8752894dd
Successfully built pip
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 20.3.1
Uninstalling pip-20.3.1:
Successfully uninstalled pip-20.3.1
Successfully installed pip-21.0.dev0
(venvpypy) ubuntu@ip-10-122-25-114:~$ pip install --help
Usage:
pip install [options] <requirement specifier> [package-index-options] ...
pip install [options] -r <requirements file> [package-index-options] ...
pip install [options] [-e] <vcs project url> ...
```
Thanks.
username_2: Thanks for confirming @username_0! I'll close this, since the fix has been merged and will be in the upcoming pip 20.3.2 release.
username_3: I'm pretty sure that this should be closed as the fix has been available in a release for a little bit now. Looks like username_2 forgot about this issue so just a gentle ping :)
username_1: Thanks for the heads-up!
Status: Issue closed
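The merged fix (#9207, released in pip 20.3.2) essentially guards the redaction so that only string defaults are rewritten. A simplified sketch of that kind of guard, not the exact patch:

```python
from pip._internal.utils.misc import redact_auth_from_url  # module path seen in the traceback

def redact_default_for_help(default_value):
    # Only a string can hold a URL with embedded credentials; list defaults
    # (multi-use options) previously reached urlsplit() and crashed.
    # Note: on Python 2 a complete check would also need to cover unicode.
    if isinstance(default_value, str):
        return redact_auth_from_url(default_value)
    return default_value

print(redact_default_for_help("https://user:secret@pypi.example/simple"))  # auth masked
print(redact_default_for_help(["https://pypi.example/simple"]))            # left unchanged
```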
|
MicrosoftDocs/msteams-docs | 1177567175 | Title: New-Team limited templates
Question:
username_0: We are in an educational institution, so we need to use the EDU templates. The template that is probably second most important is the Staff template. This is available through the Graph API (`https://graph.microsoft.com/v1.0/teamsTemplates('educationStaff')`) but not from PowerShell, which I believe uses Graph under the hood. Currently we have to provision these manually, which is no easy feat when you are creating upwards of 100 Teams each semester.
Any idea when all the templates available through the API will also be available in PowerShell?
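The thread has no recorded answer. For illustration only, the Graph endpoint mentioned above can be called directly instead of going through `New-Team`; here is a minimal sketch in Python, where the token handling and team details are placeholder assumptions:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token-with-Team.Create-permission>"  # placeholder: acquire via MSAL or similar

payload = {
    # Bind the new team to the EDU Staff template that New-Team does not expose.
    "template@odata.bind": GRAPH + "/teamsTemplates('educationStaff')",
    "displayName": "Fall Semester Staff Team",  # hypothetical example values
    "description": "Provisioned via Graph instead of New-Team",
}

resp = requests.post(
    GRAPH + "/teams",
    headers={"Authorization": "Bearer " + token, "Content-Type": "application/json"},
    json=payload,
)
resp.raise_for_status()
```

Graph answers with `202 Accepted` and provisions the team asynchronously, so a bulk-provisioning script should poll the async operation referenced by the response's `Location` header before adding members.
|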
baserproject/basercms | 854489725 | Title: Give bc_sample a small makeover
Question:
username_0: ## 概要


Moving away from the stock-photo look. I thought it might be fun to occasionally try something that could become a talking point.
I think we could also write a little in the release article about who took each photo.
Answers:
username_1: @gondoh @username_2 I think this is fine. What do you two think?
username_0: If the response seems positive, I'll reach out to the members and refresh the other photos as well.
It might also be fun to invite the marketplace authors and do this with every release.
username_2: I think this is very good!
username_0: @username_1 @username_2 @gondoh
Sorry, I hadn't actually secured the photos I intended to use.
Let me explain what I had in mind:
- Even people who can't program can participate through GitHub
- Refresh the photos with every release
- We could also hold a photo-themed event like the recent one to gather material
If there are interesting photos, we can bring them up when announcing a release.
Beyond the talking-point value, I thought it would be interesting because it gives all kinds of people a way into contributing to baserCMS development.
https://desktop.github.com/
For non-engineers, I recommend the tool above. |
tidyverse/googledrive | 1159792860 | Title: drive_auth(path = 'service_account_path.json') issue on R 4.1.2
Question:
username_0: After upgrading from R 3.6.3 to R 4.1.2, I am now unable to use `drive_auth` non-interactively. Below is my `sessionInfo()`:
```
R version 4.1.2 (2021-11-01)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur 11.6.4
Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] rvest_1.0.2 httr_1.4.2 shinybusy_0.3.0 jsonlite_1.8.0 googledrive_2.0.0 stringr_1.4.0 purrr_0.3.4
[8] shinyjs_2.1.0 shinyWidgets_0.6.4 shiny_1.7.1 ggplot2_3.3.5 DT_0.21 dplyr_1.0.8 data.table_1.14.2
loaded via a namespace (and not attached):
[1] Rcpp_1.0.8 jquerylib_0.1.4 bslib_0.3.1 pillar_1.7.0 compiler_4.1.2 later_1.3.0 tools_4.1.2
[8] digest_0.6.29 gargle_1.2.0 lifecycle_1.0.1 tibble_3.1.6 gtable_0.3.0 pkgconfig_2.0.3 rlang_1.0.1
[15] rstudioapi_0.13 cli_3.2.0 curl_4.3.2 fastmap_1.1.0 xml2_1.3.3 withr_2.5.0 rappdirs_0.3.3
[22] fs_1.5.2 sass_0.4.0 generics_0.1.2 vctrs_0.3.8 htmlwidgets_1.5.4 grid_4.1.2 tidyselect_1.1.2
[29] glue_1.6.2 R6_2.5.1 fansi_1.0.2 magrittr_2.0.2 scales_1.1.1 promises_1.2.0.1 ellipsis_0.3.2
[36] htmltools_0.5.2 mime_0.12 colorspace_2.0-3 xtable_1.8-4 httpuv_1.6.5 utf8_1.2.2 stringi_1.7.6
[43] munsell_0.5.0 crayon_1.5.0
```
Answers:
username_0: I identified the problem: `openssl` had not installed correctly, so I needed to run `install.packages("openssl")`.
username_1: I'm having a similar issue. Any idea what the minimal set of dependencies is to successfully use `drive_auth` non-interactively with `path = service_account.json`?
```
R version 4.0.0 (2020-04-24)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] openssl_2.0.0 googlesheets4_1.0.0 googledrive_2.0.0 shinyWidgets_0.6.4 bs4Dash_2.0.3 shiny_1.7.1
loaded via a namespace (and not attached):
[1] Rcpp_1.0.8 cellranger_1.1.0 bslib_0.3.1 compiler_4.0.0 pillar_1.7.0 later_1.3.0 jquerylib_0.1.4 tools_4.0.0 digest_0.6.29 jsonlite_1.8.0
[11] lifecycle_1.0.1 gargle_1.2.0 tibble_3.1.6 pkgconfig_2.0.3 rlang_1.0.2 cli_3.1.0 DBI_1.1.2 yaml_2.2.1 fastmap_1.1.0 dplyr_1.0.8
[21] askpass_1.1 generics_0.1.2 fs_1.5.2 vctrs_0.3.8 sass_0.4.0 tidyselect_1.1.2 glue_1.6.2 R6_2.5.1 fansi_1.0.2 purrr_0.3.4
[31] magrittr_2.0.2 promises_1.2.0.1 ellipsis_0.3.2 htmltools_0.5.2 assertthat_0.2.1 mime_0.12 xtable_1.8-4 httpuv_1.6.5 utf8_1.2.2 crayon_1.5.0
```
I assume you were given the same message below:
```
The googledrive package is requesting access to your Google account.
Select a pre-authorised account or enter '0' to obtain a new token.
Press Esc/Ctrl + C to cancel.
``` |
tobydragon/PAR | 468239821 | Title: Migrate functions from old Front End JS files to new files
Question:
username_0: Done when: QuestionPage and QuestionPageLibrary have had their functionality migrated to any associated OOP JS files in the front end
Answers:
username_0: 3 hours: migrating functions over and writing new implementations to fit better with the new architecture schema
username_1: 5 hours
Status: Issue closed
|
rust-lang/rust | 437960239 | Title: unused_parens incorrectly lints on `if let true = (false && true) {}`
Question:
username_0: ```rust
warning: unnecessary parentheses around `if let` head expression
--> src/lib.rs:2:19
|
2 | if let true = (false && true) {}
| ^^^^^^^^^^^^^^^ help: remove these parentheses
|
= note: #[warn(unused_parens)] on by default
```
If we remove the parens we correctly get:
```rust
error: ambiguous use of `&&`
--> src/lib.rs:2:19
|
2 | if let true = false && true {}
| ^^^^^^^^^^^^^ help: consider adding parentheses: `(false && true)`
|
= note: this will be a error until the `let_chains` feature is stabilized
= note: see rust-lang/rust#53668 for more information
```
cc #53668
The lint should take `let_chains` into account.
Answers:
username_0: This is a rather obscure bug which no one seems to have hit thus far... I only found it due to working on `let_chains`. I'll fix the issue as part of this work.
Status: Issue closed
|