{"questionId":"a954f0f39574e97ecc94cd159b66a25e4251ad276a1fda59564511d1216cf725","question":"Visual Studio Code crashes with [...ERROR:process\\_memory\\_range.cc(75)] read out of range\nI am using an Ubuntu machine and when I open Visual Studio Code, it opens for a second or so and then crashes. When starting Visual Studio Code (executable `code`) through the terminal with the command `code --verbose`, I get the following error.\n\n\n\n```\n[19814:0606\/134456.415221:ERROR:gpu_process_host.cc(993)] GPU process exited unexpectedly: exit_code=133\n[19814:0606\/134456.415235:WARNING:gpu_process_host.cc(1364)] The GPU process has crashed 6 time(s)\n[19814:0606\/134456.415243:FATAL:gpu_data_manager_impl_private.cc(448)] GPU process isn't usable. Goodbye.\n[0606\/134456.419486:ERROR:process_memory_range.cc(75)] read out of range\n[0606\/134456.419494:ERROR:process_memory_range.cc(75)] read out of range\n...\n\n```\n\nHere is part of the error message.\n\n\nI also tried to remove code and reinstall it. But I still get an error. How I can fix this?\n\n\nTo remove, I used `sudo snap remove vscode` & `cd ~ && rm -rf .vscode && rm -rf .config\/Code`.","questionMetadata":{"type":"version","tag":"other","level":"intermediate"},"answer":"There are solutions: *[Visual Studio Code crashes at startup with 1.86 version on Ubuntu 22.04 #204159](https:\/\/github.com\/microsoft\/vscode\/issues\/204159)*\n\n\nThis one worked for me:\n\n\n\n```\nsudo snap revert code --revision 159"}
{"questionId":"cc2b8a1487ffc7a284bdd7308bc817d332bd3e86afde75fa9977315e18bb123f","question":"Manifest merger failed with AGP 8.3.0\nI'm trying to upgrade my project to AGP 8.3.0 but I'm getting the error:\n\n\n\n```\nAttribute property#android.adservices.AD_SERVICES_CONFIG@resource value=(@xml\/ga_ad_services_config) from [com.google.android.gms:play-services-measurement-api:21.5.1] AndroidManifest.xml:32:13-58\n is also present at [com.google.android.gms:play-services-ads-lite:22.6.0] AndroidManifest.xml:92:13-59 value=(@xml\/gma_ad_services_config).\n Suggestion: add 'tools:replace=\"android:resource\"' to <property> element at AndroidManifest.xml to override.\n\n```\n\nAny idea on how to fix this?\n\n\nI don't have anything related to `AD_SERVICES_CONFIG` in my manifest file.","questionMetadata":{"type":"version","tag":"other","level":"intermediate"},"answer":"Problem is definitely not with your `AndroidManifest.xml` file, but rather the files that are bundled with the external Google services libraries that you have implemented in your application. Yes, sometimes they may conflict with each other (as all manifests are merged during app build) and this seems to be the case.\n\n\nEventually, Google will address this issue, but for now you can either downgrade your dependencies (need to figure out which ones, but AGP 8.2.2 didn't have this problem AFAIK).\n\n\nOr, just do as suggested by the error log, and solve the conflict by adding this block to your `AndroidManifest.xml` file:\n\n\n\n```\n<manifest\n ...\n\n <application\n ...\n\n <property\n android:name=\"android.adservices.AD_SERVICES_CONFIG\"\n android:resource=\"@xml\/gma_ad_services_config\"\n tools:replace=\"android:resource\" \/>\n \n ...\n <\/application>\n\n ...\n<\/manifest>\n\n```\n\n**Note:** I would still recommend going back to AGP 8.2.2 if your project is important, since new releases are always risky and this might not be the only problem in the updated Gradle plugin"}
{"questionId":"41e7d95dcded069cad042a948f7d2f3b929e7f0f62dd99b326f2ee3ab9919700","question":"java.lang.IllegalStateException: CompositionLocal LocalLifecycleOwner not present\nI get an `java.lang.IllegalStateException: CompositionLocal LocalLifecycleOwner` not present error when I `collectAsState()` or `collectAsStateWithLifecycle()`. I do not know what is wrong. This previously worked, however since I made a switch and some dependencies update it stopped working. The error is as follows:\n\n\n\n```\njava.lang.IllegalStateException: CompositionLocal LocalLifecycleOwner not present\nat androidx.lifecycle.compose.LocalLifecycleOwnerKt$LocalLifecycleOwner$1.invoke(LocalLifecycleOwner.kt:26)\nat androidx.lifecycle.compose.LocalLifecycleOwnerKt$LocalLifecycleOwner$1.invoke(LocalLifecycleOwner.kt:25)\nat kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)\nat androidx.compose.runtime.LazyValueHolder.getCurrent(ValueHolders.kt:29)\nat androidx.compose.runtime.LazyValueHolder.getValue(ValueHolders.kt:31)\nat androidx.compose.runtime.CompositionLocalMapKt.read(CompositionLocalMap.kt:90)\nat androidx.compose.runtime.ComposerImpl.consume(Composer.kt:2135)\nat androidx.lifecycle.compose.FlowExtKt.collectAsStateWithLifecycle(FlowExt.kt:182)\nat com.codejockie.wani.MainActivity$onCreate$1.invoke(MainActivity.kt:47)\nat com.codejockie.wani.MainActivity$onCreate$1.invoke(MainActivity.kt:45)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:109)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:35)\nat androidx.compose.ui.platform.ComposeView.Content(ComposeView.android.kt:428)\nat androidx.compose.ui.platform.AbstractComposeView$ensureCompositionCreated$1.invoke(ComposeView.android.kt:252)\nat androidx.compose.ui.platform.AbstractComposeView$ensureCompositionCreated$1.invoke(ComposeView.android.kt:251)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:109)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:35)\nat androidx.compose.runtime.CompositionLocalKt.CompositionLocalProvider(CompositionLocal.kt:228)\nat androidx.compose.ui.platform.CompositionLocalsKt.ProvideCommonCompositionLocals(CompositionLocals.kt:186)\nat androidx.compose.ui.platform.AndroidCompositionLocals_androidKt$ProvideAndroidCompositionLocals$3.invoke(AndroidCompositionLocals.android.kt:119)\nat androidx.compose.ui.platform.AndroidCompositionLocals_androidKt$ProvideAndroidCompositionLocals$3.invoke(AndroidCompositionLocals.android.kt:118)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:109)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:35)\nat androidx.compose.runtime.CompositionLocalKt.CompositionLocalProvider(CompositionLocal.kt:228)\nat androidx.compose.ui.platform.AndroidCompositionLocals_androidKt.ProvideAndroidCompositionLocals(AndroidCompositionLocals.android.kt:110)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1$1$2.invoke(Wrapper.android.kt:139)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1$1$2.invoke(Wrapper.android.kt:138)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:109)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:35)\nat androidx.compose.runtime.CompositionLocalKt.CompositionLocalProvider(CompositionLocal.kt:248)\nat 
androidx.compose.ui.platform.WrappedComposition$setContent$1$1.invoke(Wrapper.android.kt:138)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1$1.invoke(Wrapper.android.kt:123)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:109)\nat androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:35)\nat androidx.compose.runtime.ActualJvm_jvmKt.invokeComposable(ActualJvm.jvm.kt:90)\nat androidx.compose.runtime.ComposerImpl.doCompose(Composer.kt:3302)\nat androidx.compose.runtime.ComposerImpl.composeContent$runtime_release(Composer.kt:3235)\nat androidx.compose.runtime.CompositionImpl.composeContent(Composition.kt:725)\nat androidx.compose.runtime.Recomposer.composeInitial$runtime_release(Recomposer.kt:1071)\nat androidx.compose.runtime.CompositionImpl.composeInitial(Composition.kt:633)\nat androidx.compose.runtime.CompositionImpl.setContent(Composition.kt:619)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1.invoke(Wrapper.android.kt:123)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1.invoke(Wrapper.android.kt:114)\nat androidx.compose.ui.platform.AndroidComposeView.setOnViewTreeOwnersAvailable(AndroidComposeView.android.kt:1289)\nat androidx.compose.ui.platform.WrappedComposition.setContent(Wrapper.android.kt:114)\nat androidx.compose.ui.platform.WrappedComposition.onStateChanged(Wrapper.android.kt:164)\nat androidx.lifecycle.LifecycleRegistry$ObserverWithState.dispatchEvent(LifecycleRegistry.jvm.kt:320)\nat androidx.lifecycle.LifecycleRegistry.addObserver(LifecycleRegistry.jvm.kt:198)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1.invoke(Wrapper.android.kt:121)\nat androidx.compose.ui.platform.WrappedComposition$setContent$1.invoke(Wrapper.android.kt:114)\nat androidx.compose.ui.platform.AndroidComposeView.onAttachedToWindow(AndroidComposeView.android.kt:1364)\nat android.view.View.dispatchAttachedToWindow(View.java:22257)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3494)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3501)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3501)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3501)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3501)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3501)\nat android.view.ViewGroup.dispatchAttachedToWindow(ViewGroup.java:3501)\nat android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:3207)\nat android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:2659)\nat android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:9789)\nat android.view.Choreographer$CallbackRecord.run(Choreographer.java:1399)\nat android.view.Choreographer$CallbackRecord.run(Choreographer.java:1408)\nat android.view.Choreographer.doCallbacks(Choreographer.java:1008)\nat android.view.Choreographer.doFrame(Choreographer.java:938)\nat android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:1382)\nat android.os.Handler.handleCallback(Handler.java:959)\nat android.os.Handler.dispatchMessage(Handler.java:100)\nat android.os.Looper.loopOnce(Looper.java:232)\nat android.os.Looper.loop(Looper.java:317)\nat android.app.ActivityThread.main(ActivityThread.java:8501)\nat java.lang.reflect.Method.invoke(Native Method)\nat com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:552)\nat 
com.android.internal.os.ZygoteInit.main(ZygoteInit.java:878)\n\n```\n\nNot sure this is a problem but I recently switched from an Intel MacBook to an M1.\nCloned my project and continued work where I left off then updated the dependencies since Android studio gave a hints there were updates.\n\n\nMy dependencies are as follows:\n\n\n\n```\ndependencies {\n implementation(\"androidx.core:core-ktx:1.13.1\")\n implementation(\"androidx.lifecycle:lifecycle-runtime-ktx:2.8.0\")\n implementation(\"androidx.activity:activity-compose:1.9.0\")\n implementation(platform(\"androidx.compose:compose-bom:2024.05.00\"))\n implementation(\"androidx.compose.foundation:foundation\")\n implementation(\"androidx.compose.ui:ui\")\n implementation(\"androidx.compose.ui:ui-text\")\n implementation(\"androidx.compose.ui:ui-graphics\")\n implementation(\"androidx.compose.ui:ui-tooling-preview\")\n implementation(\"androidx.compose.material3:material3\")\n implementation(\"androidx.navigation:navigation-compose:2.7.7\")\n implementation(\"androidx.appcompat:appcompat:1.6.1\")\n \/\/ DataStore\n implementation(\"androidx.datastore:datastore:1.1.1\")\n \/\/ Glide\n implementation(\"com.github.bumptech.glide:compose:1.0.0-beta01\")\n \/\/ Hilt\n implementation(\"com.google.dagger:hilt-android:2.51\")\n ksp(\"com.google.dagger:hilt-android-compiler:2.51\")\n annotationProcessor(\"com.google.dagger:hilt-android:2.51\")\n implementation(\"androidx.hilt:hilt-navigation-compose:1.2.0\")\n implementation(\"androidx.hilt:hilt-work:1.2.0\")\n ksp(\"androidx.hilt:hilt-compiler:1.2.0\")\n \/\/ Kotlin Serialization\n implementation(\"org.jetbrains.kotlinx:kotlinx-serialization-json:1.0.1\")\n \/\/ Lifecycle\n implementation(\"androidx.lifecycle:lifecycle-viewmodel-ktx:2.8.0\")\n implementation(\"androidx.lifecycle:lifecycle-extensions:2.2.0\")\n implementation(\"androidx.lifecycle:lifecycle-runtime-compose:2.8.0\")\n \/\/ LiveData\n implementation(\"androidx.compose.runtime:runtime-livedata\")\n \/\/ Media\n implementation(\"androidx.media3:media3-exoplayer:1.3.1\")\n implementation(\"androidx.media3:media3-ui:1.3.1\")\n implementation(\"androidx.media3:media3-session:1.3.1\")\n \/\/ OkHttp\n implementation(platform(\"com.squareup.okhttp3:okhttp-bom:4.11.0\"))\n implementation(\"com.squareup.okhttp3:okhttp\")\n implementation(\"com.squareup.okhttp3:logging-interceptor\")\n \/\/ Protobuf\n implementation(\"com.google.protobuf:protobuf-javalite:3.18.0\")\n \/\/ Retrofit\n implementation(\"com.squareup.retrofit2:retrofit:2.9.0\")\n implementation(\"com.squareup.retrofit2:converter-gson:2.9.0\")\n \/\/ Room\n implementation(\"androidx.room:room-runtime:2.6.1\")\n annotationProcessor(\"androidx.room:room-compiler:2.6.1\")\n ksp(\"androidx.room:room-compiler:2.6.1\")\n implementation(\"androidx.room:room-ktx:2.6.1\")\n \/\/ WorkManager\n implementation(\"androidx.work:work-runtime-ktx:2.9.0\")\n\n \/\/ Test dependencies\n testImplementation(\"junit:junit:4.13.2\")\n androidTestImplementation(\"androidx.test.ext:junit:1.1.5\")\n androidTestImplementation(\"androidx.test.espresso:espresso-core:3.5.1\")\n androidTestImplementation(platform(\"androidx.compose:compose-bom:2024.05.00\"))\n androidTestImplementation(\"androidx.compose.ui:ui-test-junit4\")\n androidTestImplementation(\"androidx.navigation:navigation-testing:2.7.7\")\n androidTestImplementation(\"androidx.work:work-testing:2.9.0\")\n debugImplementation(\"androidx.compose.ui:ui-tooling\")\n debugImplementation(\"androidx.compose.ui:ui-test-manifest\")\n 
testImplementation(\"androidx.room:room-testing:2.6.1\")\n}\n\n```\n\nMainActivity.kt\n\n\n\n```\nclass MainActivity : AppCompatActivity() {\n private val mainViewModel by viewModels<MainViewModel>()\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n\n\n setContent {\n val context = LocalContext.current\n val uiState by mainViewModel.uiState.collectAsStateWithLifecycle()\n\n App(\n closeApp = { this.finish() },\n appState = uiState,\n )\n }\n }\n}","questionMetadata":{"type":"version","tag":"kotlin","level":"intermediate"},"answer":"That error lines up with [this issue](https:\/\/issuetracker.google.com\/issues\/336842920), due to a mismatch of Compose and Lifecycle versions. You are using Lifecycle `2.8.0`, which *right now* is incompatible with stable Compose versions. Once Compose `1.7.0` becomes stable, you can upgrade to use it with Lifecycle `2.8.0`, and (hopefully) this problem goes away.\n\n\nI recommend dropping back to Lifecycle `2.7.0` from the `2.8.0` that you are using now. Alternatively, you can try [the documented workarounds](https:\/\/issuetracker.google.com\/issues\/336842920#comment8)."}
{"questionId":"2ad92687cacfa8651bfdd49cc7b4a9217ea6da53b01490b20e22d1116821da1e","question":"numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject\nI want to call my Python module from the Matlab. I received the error:\n\n\n\n```\nError using numpy_ops>init thinc.backends.numpy_ops\n\n```\n\nPython Error:\n\n\n\n```\n ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject.\n\n```\n\nThe Python script is as follows\n\n\n\n```\nimport spacy\ndef text_recognizer(model_path, text):\ntry:\n # Load the trained model\n nlp = spacy.load(model_path)\n print(\"Model loaded successfully.\")\n \n # Process the given text\n doc = nlp(text)\n ent_labels = [(ent.text, ent.label_) for ent in doc.ents]\n return ent_labels\n\n```\n\nThe Matlab script is as follows\n\n\n\n```\n% Set up the Python environment\npe = pyenv;\npy.importlib.import_module('final_output');\n\n% Add the directory containing the Python script to the Python path\npath_add = fileparts(which('final_output.py'));\nif count(py.sys.path, path_add) == 0\n insert(py.sys.path, int64(0), path_add);\nend\n% Define model path and text to process\nmodel_path = 'D:\\trained_model\\\\output\\\\model-best';\ntext = 'Roses are red';\n% Call the Python function\npyOut = py.final_output.text_recognizer(model_path, text);\n% Convert the output to a MATLAB cell array\nentity_labels = cell(pyOut);\ndisp(entity_labels);\n\n```\n\nI found one solution to update Numpy, what I did, but nothing changed. I am using Python 3.9 and Numpy version 2.0.0\n\n\nThe error was received when I tried to call the Python module using a Matlab script.\n\n\nHow can I fix the issue?","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"The reason is that `pandas` defines its `numpy` dependency freely as \"anything newer than certain version of numpy\".\nThe problem occured, when `numpy==2.0.0` has been released on June 16th 2024, because it is no longer compatible with your pandas version.\n\n\nThe solution is to pin down the `numpy` version to any before the `2.0.0`. Today it could be (this is the most recent `numpy 1` release):\n\n\n\n```\nnumpy==1.26.4\n\n```\n\nTo be added in your requirements or to the pip command you use (but together with installing pandas).\n\n\nNowadays `pip` is very flexible and can handle the issue flawesly. You just need to ask it to install both `pandas` and `numpy` of given versions in the same `pip install` invocation."}
{"questionId":"c09443d29284fa5ca4047ccd83fa67b4e43eeea900b9fb60b92edaaccd00fb25","question":"\"ImportError: cannot import name 'triu' from 'scipy.linalg'\" when importing Gensim\nI am trying to use Gensim, but running `import gensim` raises this error:\n\n\n\n```\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/__init__.py\", line 11, in <module>\n from gensim import parsing, corpora, matutils, interfaces, models, similarities, utils # noqa:F401\n File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/corpora\/__init__.py\", line 6, in <module>\n from .indexedcorpus import IndexedCorpus # noqa:F401 must appear before the other classes\n File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/corpora\/indexedcorpus.py\", line 14, in <module>\n from gensim import interfaces, utils\n File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/interfaces.py\", line 19, in <module>\n from gensim import utils, matutils\n File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/matutils.py\", line 20, in <module>\n from scipy.linalg import get_blas_funcs, triu\nImportError: cannot import name 'triu' from 'scipy.linalg' (\/usr\/local\/lib\/python3.10\/dist-packages\/scipy\/linalg\/__init__.py)\n\n```\n\nWhy is this happening and how can I fix it?","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"I found the issue.\n\n\n\n> \n> The [`scipy.linalg`](https:\/\/docs.scipy.org\/doc\/scipy\/reference\/linalg.html#module-scipy.linalg) functions `tri`, `triu` & `tril` are deprecated and will be removed in SciPy 1.13.\n> \n> \n> \u2014 [SciPy 1.11.0 Release Notes \u00a7 Deprecated features](https:\/\/docs.scipy.org\/doc\/scipy\/release\/1.11.0-notes.html#deprecated-features)\n> \n> \n> \n\n\nSo, I installed SciPy v1.10.1 instead of the latest version and it was working well.\n\n\n\n```\npip install scipy==1.10.1"}
{"questionId":"15aa333bb9ac289e5956aa4ae98cb69907fc006f12534ab7e257239620f7cac7","question":"If a lambda is declared inside a default argument, is it different for each call site?\n#include <iostream>\n\nint foo(int x = [](){ static int x = 0; return ++x; }()) {\n return x;\n};\n\nint main() {\n std::cout << foo() << foo(); \/\/ prints \"12\", not \"11\"\n}\n\n```\n\nI know that default arguments are evaluated each time a function is called. Does that mean that the lambda type is different on each call? Please point to the standard quotes explaining the behaviour here.","questionMetadata":{"type":"conceptual","tag":"c++","level":"advanced"},"answer":"This all comes down to the interpretation of [[expr.prim.lambda.closure]\/1](http:\/\/eel.is\/c++draft\/expr.prim.lambda.closure#1):\n\n\n\n> \n> The type of a *lambda-expression* (which is also the type of the closure object) is a unique, unnamed non-union class type, called the closure type, whose properties are described below.\n> \n> \n> \n\n\nWhat does \"unique\" mean?\n\n\n\"The type of a *lambda-expression*... is... unique...\"\n\n\nThe first word is \"the\". **The type.** Implying that a *lambda-expression* has **one** type. But since it's \"unique\", that means that any two *lambda-expressions* have different types.\n\n\nThe word \"*lambda-expression*\" is italicized, denoting a grammar term. A lambda appearing lexically once, but evaluated more than once, is the same *lambda-expression* on each evaluation. So it has the same type in each evaluation.\n\n\nThe fact that a default argument is evaluated every time a function is called does not mean that the program behaves as if the default argument were repeated verbatim at each call site. A default argument is a piece of code that runs whenever it's used, just like a function body.\n\n\nNote, however, that instantiating a template *does* stamp out a copy of each grammar production that occurs in the template definition (though, for name lookup purposes, it's not the same as \"replaying the tokens\" at the instantiation point). In other words, if you have a *lambda-expression* inside a template and you instantiate that template, the resulting specialization has its own *lambda-expression* that is the result of instantiating the one from the template. Thus, each specialization gets a distinct type for the lambda, even though those lambdas were all defined by the same original piece of source code.\n\n\nThere are also cases where two lambdas appearing in different translation units actually have the same type. This occurs because there is a rule that can force multiple identical pieces of source code from different translation units to behave as if only one copy in the program. [[basic.def.odr]\/17](http:\/\/eel.is\/c++draft\/basic.def.odr#17)"}
{"questionId":"9aa19f209ce7c6deca7f050d8731c1e3b3fb8c83f3f9b1e3143b442242f227d7","question":"Why is \"dict[int, int]\" incompatible with \"dict[int, int | str]\"?\nimport typing\n\na: dict[int, int] = {}\nb: dict[int, int | str] = a\nc: typing.Mapping[int, int | str] = a\nd: typing.Mapping[int | str, int] = a\n\n```\n\nPylance reports an error for `b: dict[int, int | str] = a`:\n\n\n\n```\nExpression of type \"dict[int, int]\" is incompatible with declared type \"dict[int, int | str]\"\n \"dict[int, int]\" is incompatible with \"dict[int, int | str]\"\n Type parameter \"_VT@dict\" is invariant, but \"int\" is not the same as \"int | str\"\n Consider switching from \"dict\" to \"Mapping\" which is covariant in the value type\n\n```\n\nBut `c: typing.Mapping[int, int | str] = a` is OK.\n\n\nAdditionally, `d: typing.Mapping[int | str, int] = a` also gets an error:\n\n\n\n```\nExpression of type \"dict[int, int]\" is incompatible with declared type \"Mapping[int | str, int]\"\n \"dict[int, int]\" is incompatible with \"Mapping[int | str, int]\"\n Type parameter \"_KT@Mapping\" is invariant, but \"int\" is not the same as \"int | str\"\n\n```\n\nWhy are these types hint incompatible? \n\nIf a function declares a parameter of type `dict[int, int | str]`, how can I pass a `dict[int, int]` object as its parameter?","questionMetadata":{"type":"conceptual","tag":"python","level":"intermediate"},"answer":"dict` type was designed to be completely ***invariant*** on key and value. Hence when you assign `dict[int, **int**]` to `dict[int, **int | str**]`, you make the type system raise errors. [1]\n\n\n`Mapping` type on the other hand wasn\u2019t designed to be completely invariant but rather is ***invariant on key*** and ***covariant on value***. Hence you can assign one `Mapping` type (`dict[int, **int**]`) to another (`Mapping[int, **int | str**]`) if they are both covariant on value. if they are invariant on key, you can assign them else you cannot. Hence when you assign `dict[**int**, int]` to `Mapping[**int | str**, int]`, you make the type system raise errors. [2][3]\n\n\nThere is a good reason for the above design in the type system and I will give a few:\n\n\n***1***. `dict` type is a concrete type so it will actually get used in a program.\n\n\n***2***. Because of the above mentioned, it was designed the way it was to avoid things like this:\n\n\n\n```\na: dict[int, int] = {}\nb: dict[int, int | str] = a\nb[0] = **0xDEADBEEF**\nb[1] = **\"Bull\"**\n```\n\n***`dict`s are assigned by reference*** [4] hence any mutation to `b` is actually a mutation to `a`. So if one reads `a` as follows:\n\n\n\n```\nx: int = a[0]\nassert isinstance(x, int)\ny: int = a[1]\nassert isinstance(y, int)\n```\n\nOne gets unexpected results. `x` passes but `y` doesn\u2019t. It then seems like the type system is contradicting itself. This can cause worse problems in a program.\n\n\n*For posterity, to correctly type a dictionary in Python, use `Mapping` type to denote a readonly dictionary and use `MutableMapping` type to denote a read-write dictionary*.\n\n\n\n\n---\n\n\n[1] Of course Python\u2019s type system doesn\u2019t influence program\u2019s running behaviour but at least linters have some use of this.\n\n\n[2] `dict` type is a `Mapping` type but `Mapping` type is not a `dict` type.\n\n\n[3] Keep in mind that the *ordering of types* is important in type theory.\n\n\n[4] **All variable names in Python are references to values**."}
{"questionId":"af9d233cd713a35a907337a1933e840f8446860137f165589ec4edac77a38033","question":"Factor from numeric vector drops every 100.000th element from its levels\nConsider a vector of type `numeric` with over 100.000 elements. In the example below, it's simply the range 1:500001.\n\n\n\n```\nn <- 500001\narr <- as.numeric(1:n)\n\n```\n\nThe following sequence of `factor` calls causes odd behaviour:\n\n\nFirst call `factor` with the `levels` argument specified as the exact same range that `arr` was defined with. Predictably, the resulting variable has exactly `n` levels:\n\n\n\n```\n> tmp <- factor(arr, levels=1:n)\n> nlevels(tmp)\n[1] 500001\n\n```\n\nNow call `factor` again on the result from before. The outcome is that the new value, `tmp2`, is missing some values from its levels:\n\n\n\n```\n> tmp2 <- factor(tmp)\n> nlevels(tmp2)\n[1] 499996 \n\n```\n\nChecking to see which items are missing, we find it's every 100.000th element (which, in this case, have value equal to their index):\n\n\n\n```\n> which(!levels(tmp) %in% levels(tmp2))\n[1] 100000 200000 300000 400000 500000 \n\n```\n\nDecreasing `n` to <=100.000 eliminates this unexpected behaviour. However, it occurs for any `n` > 100.000.\n\n\n\n```\n> n <- 99999\n> arr <- as.integer(1:n)\n> tmp <- factor(arr)\n> tmp2 <- factor(tmp)\n> nlevels(tmp2)\n[1] 99999\n> which(!levels(tmp) %in% levels(tmp2))\ninteger(0)\n\n```\n\nThis also does not happen when the `arr` vector has a type other than `numeric`:\n\n\n\n```\n> n <- 500001\n> arr <- as.integer(1:n)\n> tmp <- factor(arr, levels=1:n)\n> tmp2 <- factor(tmp)\n> nlevels(tmp2)\n[1] 500001\n\n```\n\nFinally, the problem does not occur when the `levels` argument is left unspecified in the first call to `factor()`.\n\n\nWhat could be causing this behaviour? Tested in R 4.3.2","questionMetadata":{"type":"conceptual","tag":"r","level":"intermediate"},"answer":"Building on ThomasIsCoding's answer, it is due to the scientific notation rule applying to real numbers, but not applying to integers...\n\n\nFor example, in the console...\n\n\n\n```\noptions(scipen = 0) #uses scientific notation if fewer characters than normal\n\n500000L\n[1] 500000 #integer displayed in normal notation\n\n500000\n[1] 5e+05 #numeric displayed in shorter scientific notation\n\n```\n\nSo the names cause a mismatch with the factor levels for each multiple of 100000 using numeric values.\n\n\nThe problem can be solved by increased `scipen`.\n\n\nI thought `scipen` was primarily to control displayed values, so it is odd that it is being used for factor levels."}
{"questionId":"ed35826cba1dd94756080b6f21c9e6472229e667b9edbdc1f9582bab59ea1032","question":"Why are random integers generated by multiplying by MAX\\_SAFE\\_INTEGER not evenly distributed between odd and even?\nTrying to generate a number using MAX\\_SAFE\\_INTEGER I noticed something strange, I'm sure it has to do with the way numbers are stored in JavaScript, but I don't understand what exactly it is.\n\n\n\n```\n\/\/ Always returns an odd number\nMath.floor(Math.random() * Number.MAX_SAFE_INTEGER)\n\n\/\/ Returns an odd number 75% of the time\nMath.floor(Math.random() * (Number.MAX_SAFE_INTEGER - 1))\n\n\/\/ Has a 50\/50 chance to return odd or even\nMath.ceil(Math.random() * Number.MAX_SAFE_INTEGER)\n\n```\n\nHow can this behavior be explained and what would be the largest integer you can use in `Math.floor` to get a 50\/50 ratio?\n\n\n\n\n\n```\nlet evenCount = 0, oddCount = 0;\n\nfor (let i = 0; i < 10000; i++) {\n const randomNumber = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);\n if (randomNumber % 2 === 0) {\n evenCount++;\n } else {\n oddCount++;\n }\n}\n\nconsole.log(\"Number of even numbers:\", evenCount);\nconsole.log(\"Number of odd numbers:\", oddCount);","questionMetadata":{"type":"conceptual","tag":"javascript","level":"intermediate"},"answer":"First, you should multiply by 253 (`Number.MAX_SAFE_INTEGER + 1`) to get all 53 bits from a `Math.random` implementation that uses the full double precision. 253\u22121 doesn\u2019t hurt much (it maps both 0 and 2\u221253 to 0, producing a tiny bias), but it\u2019s better to pick the solution that\u2019s obviously correct.\n\n\nBut then what\u2019s the issue? Well, your original code works fine on Firefox and Safari! It\u2019s just that V8 (i.e. Chrome and derivatives) uses 52 bits instead of 53.\n\n\n\n\n\n```\nlet mostBits = 0;\n\nfor (let i = 0; i < 10000; i++) {\n const bits = Math.random().toString(2).slice(2).length;\n if (bits > mostBits) {\n mostBits = bits;\n }\n}\n\nconsole.log(\"Most bits:\", mostBits);\n```\n\n\n\n\n\n\n(Firefox, Safari)\n\n\n\n> \n> Most bits: 53\n> \n> \n> \n\n\n(Chrome)\n\n\n\n> \n> Most bits: 52\n> \n> \n> \n\n\n(The reason that you can store 53 bits accurately with a significand with 52 bits of storage is that the integer part is implicitly a 1 that can be scaled to the right place by the exponent, same as why `Number.MAX_SAFE_INTEGER` is what it is.)\n\n\nLooking at [the relevant part of V8\u2019s implementation](https:\/\/github.com\/v8\/v8\/blob\/fa10a1917f41dc1028c9f55fb92e7fcc33c34b79\/src\/base\/utils\/random-number-generator.h#L111-L116), I assume the only reason it does this is for performance \u2013 by fixing the exponent to make the range [1, 2), it can insert the random bits directly into the double instead of having to perform a multiplication.\n\n\n\n> \n> \n> ```\n> static inline double ToDouble(uint64_t state0) {\n> \/\/ Exponent for double values for [1.0 .. 2.0)\n> static const uint64_t kExponentBits = uint64_t{0x3FF0000000000000};\n> uint64_t random = (state0 >> 12) | kExponentBits;\n> return base::bit_cast<double>(random) - 1;\n> }\n> \n> ```\n> \n> \n\n\nWhy does multiplying a number in the final result\u2019s range by 253\u22121 and then flooring it always produce an odd number?\n\n\n- (253\u22121)x = 253 x \u2212 x (exactly)\n- 253 x is always even\n- In order for 253 x \u2212 x to round to the exact floating-point value 253 x (and therefore be an even number), x has to be smaller than 253 x\u2019s ULP (unit in the last place) \u2013 which it never can be! 
x\u2019s ULP is 1\/253 of the value of its most significant bit, which is \u2264 x.\n\n\nSo to answer your question,\n\n\n\n> \n> what would be the largest integer you can use in `Math.floor` to get a 50\/50 ratio?\n> \n> \n> \n\n\nAt most 252, but I wouldn\u2019t *count* on `Math.random` having more than 32 bits of randomness unless you\u2019re only targeting one engine (V8 changed to 52 [in 2015](https:\/\/v8.dev\/blog\/math-random), for example), or even on it being good enough randomness for a particular purpose \u2013 none of this stuff is in [the spec](https:\/\/tc39.es\/ecma262\/#sec-math.random).\n\n\n\n> \n> This function returns a Number value with positive sign, greater than or equal to +0 but strictly less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, **using an implementation-defined algorithm or strategy**.\n> \n> \n> \n\n\nYou might want to consider implementing a known PRNG in JavaScript and seeding it with strong randomness from [`crypto.getRandomValues`](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Crypto\/getRandomValues)."}
{"questionId":"47a6304486e50b4a6765230b0128b8ef40c3ff593731b9f4f84203fa401ca85e","question":"Error: Type 'FontFeature' not found. Flutter google\\_fonts package error\nWhen using the latest version of google\\_fonts (6.2.0) package in flutter project, I'm facing this 'Type FontFeature not found' issue:\n\n\n\n```\n\/C:\/Users\/Dell\/AppData\/Local\/Pub\/Cache\/hosted\/pub.dev\/google_fonts-6.2.0\/lib\/src\/google_fonts_base.dart:69:8: \nError: 'FontFeature' isn't a type.\n List<FontFeature>? fontFeatures,\n ^^^^^^^^^^^\nTarget kernel_snapshot failed: Exception\n\n\nFAILURE: Build failed with an exception.\n\n* What went wrong:\nExecution failed for task ':app:compileFlutterBuildDebug'.\n> Process 'command 'C:\\flutter\\flutter\\bin\\flutter.bat'' finished with non-zero exit value 1\n\n```\n\nI tried downgrading the package version but the issue persists.\nAlso tried flutter clean and pub get..","questionMetadata":{"type":"version","tag":"dart","level":"intermediate"},"answer":"To everyone facing this issue, downgrade to an older version 6.1.0 (recommended) to get this issue solved.\nAlso, keep this in mind:\n\n\n\n> \n> When you downgrade packages, make sure to remove the caret (^) in front of the version number. For example: google\\_fonts: 6.1.0\n> \n> \n> \n\n\nThanks to [@Dhafin Rayhan](https:\/\/stackoverflow.com\/users\/13625293\/dhafin-rayhan) for pointing it out."}
{"questionId":"d87a826c9c896bff461dc228968ca23d9f446ce1f65a4cbc7400144b0bd8cf1f","question":"Angular Material 18: mat.define-palette() causes \"Undefined function\" error\nAfter upgrading my Angular core libraries to version 18, I **migrated to Angular Material 18** by running:\n\n\n`ng update @angular\/material`\n\n\nThe update went smoothly but when I tried to compile my app I got the following error:\n\n\n\n```\nX [ERROR] Undefined function.\n \u2577\n14 \u2502 $myapp-theme-primary: mat.define-palette(mat.$indigo-palette, A400, A100, A700);\n \u2502 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n \u2575\n src\\styles.scss 14:23 root stylesheet [plugin angular-sass]\n\n angular:styles\/global:styles:2:8:\n 2 \u2502 @import 'src\/styles.scss';\n \u2575 ~~~~~~~~~~~~~~~~~\n\n```\n\nMy `styles.scss` worked perfectly with the previous version of Angular Material (**v.17**). It looks as follows:\n\n\n\n```\n@use '@angular\/material' as mat;\n@include mat.core();\n\n$myapp-theme-primary: mat.define-palette(mat.$indigo-palette, A400, A100, A700);\n$myapp-theme-accent: mat.define-palette(mat.$indigo-palette);\n$myapp-theme-warn: mat.define-palette(mat.$red-palette);\n\n$myapp-theme: mat.define-light-theme((\n color: (\n primary: $myapp-theme-primary,\n accent: $myapp-theme-accent,\n warn: $myapp-theme-warn,\n )\n));\n\n@include mat.all-component-themes($myapp-theme);\n\n```\n\nHow do I have to adapt my code in `styles.scss` in order to make it work with Angular Material 18?","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"Seems like the automatic migration (`ng update @angular\/material`) did not fully work for your app. `define-palette` and some other functions were deprecated and replaced with similar named functions using the `m2-` prefix:\n\n\nSee: <https:\/\/github.com\/angular\/components\/releases\/tag\/18.0.0> (look for the section *Breaking Changes*)\n\n\nChange your code as follows:\n\n\n\n```\n@use '@angular\/material' as mat;\n@include mat.core();\n\n$myapp-theme-primary: mat.m2-define-palette(mat.$m2-indigo-palette, A400, A100, A700);\n$myapp-theme-accent: mat.m2-define-palette(mat.$m2-indigo-palette);\n$myapp-theme-warn: mat.m2-define-palette(mat.$m2-red-palette);\n\n$myapp-theme: mat.m2-define-light-theme((\n color: (\n primary: $myapp-theme-primary,\n accent: $myapp-theme-accent,\n warn: $myapp-theme-warn,\n )\n));\n\n@include mat.all-component-themes($myapp-theme);"}
{"questionId":"2f056b5d0932a9055b818ea6304a617470a35b80e804afe1f24334563aae88da","question":"When is C++23 auto(x) useful?\nI'm looking for a simple example when C++ 23 `auto(x)` could be useful.\n\n\nThis is what I have so far:\n\n\n\n```\nstruct A {\n A() = default;\n explicit A(const A&) {} \/\/ copy constructor\n};\nstruct B {\n A child;\n};\n\ntemplate<class T>\nvoid printChild(T t) {\n}\n\ntemplate<class T>\nvoid printParent(T t) {\n \/\/ printChild(t.child); \/\/ Error - copy ctr is explicit\n printChild(A(t.child)); \/\/ if we knew the type\n printChild(auto(t.child)); \/\/ in C++23\n}\n\n\n \nint main() {\n B b;\n printParent(b);\n}\n\n```\n\nNow I'm looking for a simple example without an explicit constructor, and perhaps another one where the `decay_copy` benefit is shown. The examples I've found online so far have not been clear to me.","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"auto(...)` has the benefit that it always clearly communicates that a copy is needed and intended.\nThis is one of the motivations for the original proposal, [P0849R8: `auto(x)`: *decay-copy* in the language](https:\/\/www.open-std.org\/jtc1\/sc22\/wg21\/docs\/papers\/2021\/p0849r8.html)\n\n\nWhile you could write\n\n\n\n```\n\/\/ assuming non-explicit copy constructor\nauto copy = t.child;\nprintChild(copy);\n\n```\n\n... it's not obvious to the reader that the extra variable is needed (or not).\nBy comparison, `printChild(auto(t.child));` is expressing the intent to copy very clearly, and it works even if you don't know the type of `copy` or if the type is very lengthy and annoying to spell out.\n\n\nOf course, since `printChild` accepts any `T` by value, you could just write `printChild(t.child)` and let the copy take place implicitly.\nHowever, in generic code, you typically work with forwarding references or other kinds of references, not values.\nYou don't want to pass things by value if you don't know whether they're small types.\n\n\nA motivating example comes from the proposal itself (slightly adapted):\n\n\n\n```\nvoid pop_front_alike(Container auto& x) {\n std::erase(x, auto(x.front()));\n}\n\n```\n\n*Note: the copy of `x.front()` is needed here because erasing vector contents would invalidate the reference obtained from `x.front()` and passed to `std::erase`.*\n\n\nOutside of templates, you often should pass stuff by rvalue reference as well, as recommended by CppCoreGuidelines [F.18: For \u201cwill-move-from\u201d parameters, pass by X&& and std::move the parameter](http:\/\/isocpp.github.io\/CppCoreGuidelines\/CppCoreGuidelines#Rf-consume):\n\n\n\n```\n\/\/ CppCoreGuidelines recommends passing vector by rvalue ref here.\nvoid sink(std::vector<int>&& v) {\n store_somewhere(std::move(v));\n}\n\nstruct S {\n std::vector<int> numbers;\n void foo() {\n \/\/ We need to copy if we don't want to forfeit our own numbers:\n \n sink(numbers); \/\/ error: rvalue reference cannot bind to lvalue\n sink(std::vector<int>(numbers)); \/\/ OK but annoying\n sink(auto(numbers)); \/\/ :)\n }\n};\n\n```\n\nLast but not least, you can simply look at the [C++20 standard](https:\/\/isocpp.org\/files\/papers\/N4860.pdf).\nThere are 43 occurrences of *decay-copy* in the document, and any use of *decay-copy* can usually be replaced with `auto(x)`.\nTo name some examples,\n\n\n- `std::ranges::data(t)` may expand to `*decay-copy*(t.data())`, and\n- the `std::thread` constructor applies *decay-copy* to each argument."}
{"questionId":"3e1def3d9f8c084626f110d2c520c7d786555d95711b34f231efa3f057c0c060","question":"Why is it possible to refer to enum values when a field is named identically to the enum type, but not when the field type is made nullable?\nWhy does `A` work, but `B` fails to compile?\n\n\nIs it a bug? If not, where this different behavior is described\/specified?\n\n\n\n```\nenum ControlType { Foo }\n\nclass A\n{\n public ControlType ControlType = ControlType.Foo;\n}\n\nclass B\n{\n public ControlType? ControlType = ControlType.Foo; \/\/ <-- error CS0236: A field initializer cannot reference the non-static field, method, or property 'B.ControlType'\n}","questionMetadata":{"type":"conceptual","tag":"c#","level":"intermediate"},"answer":"The difference is whether the situation ends up meeting the requirements of [section 12.8.7.2 of the C# spec](https:\/\/github.com\/dotnet\/csharpstandard\/blob\/draft-v8\/standard\/expressions.md#12872-identical-simple-names-and-type-names) - \"Identical simple names and type names\".\n\n\n\n> \n> In a member access of the form `E.I`, if `E` is a single identifier, and if the meaning of `E` as a *simple\\_name* (\u00a712.8.4) is a constant, field, property, local variable, or parameter with the same type as the meaning of `E` as a *type\\_name* (\u00a77.8.1), then both possible meanings of `E` are permitted. The member lookup of `E.I` is never ambiguous, since `I` shall necessarily be a member of the type `E` in both cases. In other words, the rule simply permits access to the static members and nested types of `E` where a compile-time error would otherwise have occurred.\n> \n> \n> \n\n\nIn your case A, `ControlType.Foo` looks up `ControlType`, finds that it's a property *with the same type* as `E` (`ControlType`) and so allows the member lookup of `Foo` both as a static member of the type *and* as an member (via effectively accessing the `ControlType` property).\n\n\nIn your case B, `ControlType.Foo` looks up `ControlType`, finds that it's a property *with a different type* to `E` (it's `ControlType?` this time) so the member lookup proceeds *only* with the members of `ControlType?`."}
{"questionId":"69895dca59121fa168a3dbd7343ad9abb0f85c73c9421b30b36b30edc570e5b9","question":"C++20 std::vector comparison weird behaviour\nOverloading the `operator bool()` for a custom class `T` breaks `std::vector<T>` comparison operators.\n\n\nThe following code tried on the first online compiler google suggest me prints\n\n\n\n```\nv1 > v2: 0\nv1 < v2: 1\n\n```\n\nwhen `operator bool()` is commented and\n\n\n\n```\nv1 > v2: 0\nv1 < v2: 0\n\n```\n\nwhen it's uncommented.\n\n\n\n```\n#include <iostream>\n#include <vector>\n\nclass T {\n int _value;\npublic:\n constexpr T(int value) : _value(value) {}\n constexpr bool operator==(const T rhs) const { return _value == rhs._value; }\n constexpr bool operator!=(const T rhs) const { return _value != rhs._value; }\n constexpr bool operator <(const T rhs) const { return _value < rhs._value; }\n constexpr bool operator >(const T rhs) const { return _value > rhs._value; }\n \n \/\/constexpr operator bool() const { return _value; } \/\/ <-- breaks comparison\n};\n\nint main()\n{\n auto v1 = std::vector<T>{1,2,3};\n auto v2 = std::vector<T>{1,2,9};\n std::cout << \"v1 > v2: \" << (v1 > v2) << std::endl;\n std::cout << \"v1 < v2: \" << (v1 < v2) << std::endl;\n return 0;\n}\n\n```\n\nThis appears to be true only starting from C++20. What's changed underneath in `std::vector`?","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"C++20 replaces the individual `<`,`<=`,`>`,`>=` operators of `std::vector` (and of many other standard classes) with a single `<=>`.\n\n\nInternally it tries to use `<=>` to compare the elements, and falls back to the old operators if the type doesn't overload `<=>`.\n\n\nSince you have a non-explicit `operator bool`, applying `<=>` converts both operands to bool and compares those. The fix is to make `operator bool` `explicit` (which is a good idea in general) (so that `<=>` fails and `vector` falls back to the old operators), and\/or replace `<`,`<=`,`>`,`>=` with `<=>` (which is also a good idea in general)."}
{"questionId":"1b73803098949a95bdf2bd1c5fa7ecf5533cd59befffbecc6e644c4f3474adff","question":"HttpClientModule is deprecated in Angular 18, what's the replacement?\nI have a project I migrated to Angular 18 with a setup to use the `HttpClient` by importing the `HttpClientModule`.\n\n\n\n```\n@NgModule({\n imports: [\n BrowserModule,\n HttpClientModule,\n ...\n ],\n declarations: [\n AppComponent,\n ...\n ],\n bootstrap: [ AppComponent ]\n})\nexport class AppModule {} \n\n```\n\nIn v17 `HttpClientModule` everything was fine but now it is marked as deprecated.\n\n\nWhy is it deprecated and what is the replacement ?","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"The `HttpClientModule` was superseeded by the already existing `provideHttpClient()` provider function.\n\n\n\n```\n@NgModule({\n imports: [\n BrowserModule,\n \/\/ Remove the module \n ...\n ],\n declarations: [\n AppComponent,\n ...\n ],\n providers: [provideHttpClient()] \/\/ add it here\n bootstrap: [ AppComponent ]\n})\nexport class AppModule {} \n\n```\n\nIf you see the following error: `Type 'EnvironmentProviders' is not assignable to type 'Provider'.`, it means you were importing the `HttpClientModule` in a component. This shouldn't have happen in the first place. Simply remove the import.\n\n\nIf you are relies on standalone component, `provideHttpClient()` needs to be added to the providers when invoking `bootstrapApplicati()` :\n\n\n\n```\nboostrapApplication(AppComponent, {providers: [provideHttpClient()]});\n\n```\n\n\n\n---\n\n\nThe reason behind this change is that the `HttpClientModule` doubled-up the `provideHttpClient()` function that was introduced for standalone apps.\n\n\nAnd here is an extract of the [Angular source code](https:\/\/github.com\/angular\/angular\/blob\/1872fcd8e09fefb52f9b36e8261702cd6fb03f85\/packages\/common\/http\/src\/module.ts#L96-L103), the module was really just providing the HttpClient. (No declarations, imports or export whatsoever)\n\n\n\n```\n@NgModule({\n providers: [provideHttpClient(withInterceptorsFromDi())],\n})\n\nexport class HttpClientModule {}\n\n```\n\nSo the team chose to deprecate it and the deprecation message suggests to use the `provideHttpClient()` provider function. This way devs would less be inclined to have both the module and the provider declared. Which was a common issue amongst new developers."}
{"questionId":"1ba004f9bbfafb9ac88f61b88368c27b451679d13af026285fb5b5371a29b352","question":"How can I run a function at the end of a pipe?\nThis code makes a sentence from two different columns in a data frame\n\n\n\n```\nlibrary(dplyr); library(tibble); library(magrittr)\n\nmtcars %>% \n rownames_to_column(var = \"car\") %>%\n sample_n(5) -> \n df\n\npaste0(df$car, \" (\", df$mpg, \")\", collapse = \", \")\n\n# \"Mazda RX4 Wag (21), Hornet Sportabout (18.7), Merc 280 (19.2), Dodge Challenger (15.5), Merc 450SLC (15.2)\"\n\n```\n\nBut instead of having `paste0(df$car, \" (\", df$mpg, \")\", collapse = \", \")` run on a standalone line, how can I get it to run at end of pipe like the below (which throws an error as written):\n\n\n\n```\nmtcars %>% \n rownames_to_column(var = \"car\") %>%\n sample_n(5) %>%\n paste0(df$car, \" (\", df$mpg, \")\", collapse = \", \")","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"with()` would work for this:\n\n\n\n```\nmtcars %>% \n rownames_to_column(var = \"car\") %>%\n sample_n(5) %>% \n with(paste0(car, \" (\", mpg, \")\", collapse = \",\"))\n\n```\n\nAnother possibility would be to end the pipe with:\n\n\n\n```\n... %>% \n mutate(word = glue(\"{car} ({mpg})\")) %>% \n pull(word) %>% \n paste0(collapse =\", \")"}
{"questionId":"ca4149d48778710b4793a7f1e8804927f803b0f20a9e19760dbb7fb35ce759d3","question":"How can I create a matrix with a vector of 1 in the diagonal?\nI'm trying to create a diagonal matrix with 390 rows and 2340 columns, but in the diagonal I need to have a vector of 1, `rep(1,6)`.\n\n\nFor example, these should be the first two rows:\n\n\n\n```\n 1111110.............................0\n 0000001111110.......................0\n\n```\n\nHow can I do it?","questionMetadata":{"type":"implementation","tag":"r","level":"beginner"},"answer":"Thinking of it row by row, you want six ones, followed by 2340 zeros (six of which overflow into the next row, shifting the sequence of ones by six columns), repeated over and over again. So you can acheive this by doing:\n\n\n\n```\nmatrix(c(rep(1, 6), rep(0, 2340)), ncol = 2340, nrow = 390, byrow = TRUE)\n\n```\n\nNote that there will be a warning about the data not being a multiple of the number of rows, but that's expected: the zeros on the last row would be assigned to row 391, but we say we only want 390, so the data gets truncated.\n\n\nYou can verify the result with:\n\n\n\n```\nx[1:20, 1:20] # Top-left corner\nx[371:390, 2321:2340] # Bottom-right corner"}
{"questionId":"3b2d19b4ecfcef2ed9941269d861d47cdd70ea1ae7a03188d615a5154ff20230","question":"How can I subtract a number from string elements in R?\nI have a long string. The part is\n\n\n\n```\nx <- \"Text1 q10_1 text2 q17 text3 q22_5 ...\"\n\n```\n\nHow can I subtract 1 from each number after \"q\" letter to obtain the following?\n\n\n\n```\ny <- \"Text1 q9_1 text2 q16 text3 q21_5 ...\"\n\n```\n\nI can extract all my numbers from x:\n\n\n\n```\nnumbers <- stringr::str_extract_all(x, \"(?<=q)\\\\d+\")\nnumbers <- as.integer(numbers[[1]]) - 1\n\n```\n\nBut how can I update x with these new numbers?\n\n\nThe following is not working\n\n\n\n```\nstringr::str_replace_all(x, \"(?<=q)\\\\d+\", as.character(numbers))","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"I learned today that `stringr::str_replace_all` will take a function:\n\n\n\n```\nstringr::str_replace_all(\n x, \n \"(?<=q)\\\\d+\", \n \\(x) as.character(as.integer(x) - 1)\n)"}
{"questionId":"9223d95caf8663514736230c53f807d477f1bcf3faccaae5d0377fdde206ce26","question":"A class both derives from and its first member has type deriving from the same base class. Is the class standard-layout?\nAs far as I know, a property of the standard-layout class is that the address of a standard-layout object is equal to its initial member's. I tested the following code with g++ and clang++, but found that `Derived3` **is** a standard-layout class and `&d` **is not** equal to `&d.c`.\n\n\n\n```\n#include <iostream>\nusing namespace std;\n\nstruct Base {};\n\nstruct Derived1 : Base\n{\n int i;\n};\n\nstruct Derived3 : Base\n{\n Derived1 c;\n int i;\n};\n\nint main()\n{\n cout << is_standard_layout_v<Derived3> << endl;\n\n Derived3 d;\n cout << &d << endl;\n cout << &d.c << endl;\n\n return 0;\n}","questionMetadata":{"type":"conceptual","tag":"c++","level":"advanced"},"answer":"Following the word of [the standard](https:\/\/timsong-cpp.github.io\/cppwp\/class.prop#3), they are indeed standard-layout types. Going through the points one by one:\n\n\n\n> \n> A class S is a standard-layout class if it:\n> \n> \n> - has no non-static data members of type non-standard-layout class (or array of such types) or reference, [...]\n> \n> \n> \n\n\n`int` is standard-layout. `Derived1` is standard layout, as we'll see.\n\n\n\n> \n> - has no non-standard-layout base classes,\n> \n> \n> \n\n\n`Base` is empty, so standard-layout.\n\n\n\n> \n> - has at most one base class subobject of any given type,\n> \n> \n> \n\n\nBoth `Derived1` and `Derived3` has only a single base `Base`.\n\n\n\n> \n> - has all non-static data members and bit-fields in the class and its base classes first declared in the same class, and\n> \n> \n> \n\n\nMeaning, within an inheritance hierarchy, all data members are declared in the same class. This is clearly true for `Derived1`. This is also true for `Derived3` because `Derived1` is not in the inheritance hierarchy.\n\n\nTo make this point clearer, consider a simpler example\n\n\n\n```\nstruct B {};\nstruct D1 : B {};\nstruct D3 : B { D1 c; };\n\n```\n\nWhich also runs into the same address problems as in the question, but clearly fulfills this bullet point.\n\n\n\n> \n> - has no element of the set M(S) of types as a base class, where for any type X, M(X) is defined as follows.\n> [Note 2:\u2002M(X) is the set of the types of all non-base-class subobjects that can be at a zero offset in X. \u2014 end note]\n> \t- If X is a non-union class type with no non-static data members, the set M(X) is empty.\n> \t- If X is a non-union class type with a non-static data member of type X0 that is either of zero size or is the first non-static data member of X (where said member may be an anonymous union), the set M(X) consists of X0 and the elements of M(X0). [...]\n> \n> \n> \n\n\nMeaning, `M(Derived3)` is the set {`Derived1`, `int`}, none of which is a base class of `Derived3`.\n\n\nLikewise, `M(Derived1)` is the set {`int`}, which is not a base class of `Derived1`.\n\n\n\n\n---\n\n\nBeing standard-layout means the class and its first data member is [pointer-interconvertible](https:\/\/timsong-cpp.github.io\/cppwp\/basic.compound#4). To be pedantic, the representation of pointers being different doesn't prove there's a problem, but comparing the results of `reinterpret_cast` does:\n\n\n\n```\nstd::cout << (&d.c == reinterpret_cast<Derived1*>(&d)); \/\/ 0 for clang and gcc\n\n```\n\nThus the compilers are not technically compliant. 
However, this is an impossible situation: the `Base` subobject in `Derived1` [cannot have the same address](https:\/\/timsong-cpp.github.io\/cppwp\/intro.object#10.2) as the `Base` subobject in `Derived3`, which is why the compilers placed `Derived1` at a four byte offset from the start.\n\n\nStandard-layout classes have a [history](https:\/\/cplusplus.github.io\/CWG\/issues\/1813.html) of [defect](https:\/\/cplusplus.github.io\/CWG\/issues\/1672.html) reports, and this looks like it should be another one."}
{"questionId":"217ff20a719b13471ddb3ce3af93a0c547d967c15b7ac922f7839bd7468c4c1b","question":"unrecognized time zone\nWith a recent update on Ubuntu (23.10 mantic), my R no longer recognizes `\"US\/Eastern\"`.\n\n\n\n```\nsessionInfo()\n# R version 4.3.2 (2023-10-31)\n# Platform: x86_64-pc-linux-gnu (64-bit)\n# Running under: Ubuntu 23.10\n# Matrix products: default\n# BLAS: \/opt\/R\/4.3.2\/lib\/R\/lib\/libRblas.so \n# LAPACK: \/usr\/lib\/x86_64-linux-gnu\/openblas-pthread\/liblapack.so.3; LAPACK version 3.11.0\n# locale:\n# [1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8 LC_MESSAGES=C.UTF-8 \n# [7] LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C \n# time zone: America\/New_York\n# tzcode source: system (glibc)\n# attached base packages:\n# [1] stats graphics grDevices utils datasets methods base \n# other attached packages:\n# [1] r2_0.10.0\n# loaded via a namespace (and not attached):\n# [1] compiler_4.3.2 clipr_0.8.0 fastmap_1.1.1 cli_3.6.2 tools_4.3.2 htmltools_0.5.7 rmarkdown_2.25 knitr_1.45 xfun_0.41 \n# [10] digest_0.6.34 rlang_1.1.3 evaluate_0.23 \n\nlubridate::with_tz(Sys.time(), tzone = \"US\/Eastern\")\n# Warning in with_tz.default(Sys.time(), tzone = \"US\/Eastern\") :\n# Unrecognized time zone 'US\/Eastern'\n# [1] \"2024-03-18 13:49:56\"\n\n```\n\nOn a similarly-configured (R-wise) 22.04 jammy system, however, it works just fine.\n\n\n\n```\nsessionInfo()\n# R version 4.3.2 (2023-10-31)\n# Platform: x86_64-pc-linux-gnu (64-bit)\n# Running under: Ubuntu 22.04.4 LTS\n# Matrix products: default\n# BLAS: \/usr\/lib\/x86_64-linux-gnu\/openblas-pthread\/libblas.so.3\n# LAPACK: \/usr\/lib\/x86_64-linux-gnu\/openblas-pthread\/libopenblasp-r0.3.20.so; LAPACK version 3.10.0\n# locale:\n# [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C\n# time zone: Etc\/UTC\n# tzcode source: system (glibc)\n# attached base packages:\n# [1] stats graphics grDevices utils datasets methods base\n# loaded via a namespace (and not attached):\n# [1] compiler_4.3.2\n\nlubridate::with_tz(Sys.time(), tzone = \"US\/Eastern\")\n# [1] \"2024-03-18 09:49:19 EDT\"\n\n```\n\nWhy does a normally-recognized TZ become unusable?\n\n\n\n\n---\n\n\nThis is true on the OS itself, not just in R:\n\n\n\n```\n$ TZ=\"America\/New_York\" date\nMon Mar 18 10:22:03 AM EDT 2024\n$ TZ=\"US\/Eastern\" date\nMon Mar 18 02:22:07 PM 2024\n\n```\n\n(notice the missing TZ in the second output)","questionMetadata":{"type":"version","tag":"r","level":"intermediate"},"answer":"The debate over the use of \"Country\/Region\" (e.g. `\"US\/Eastern\"`) as opposed to \"Continent\/City\" (`\"America\/New_York\"`) is not new. There is less ambiguity in the latter, where geopolitical forces can change the meaning of the former. 
So far (and still, afaict), the stance has been to maintain backward compatibility.\n\n\nHowever, when `tzdata` 2024 was released, on Ubuntu 23.10 the package (`2024a-0ubuntu0.23.10`) does not include the `US\/` symlinks; the same package on Ubuntu 22.04 *does* contain the links (`2024a-0ubuntu0.22.04`)\n\n\nBased on <https:\/\/bugs.launchpad.net\/ubuntu\/+source\/tzdata\/+bug\/2058249>, the proper (and intended) fix is to install the `tzdata-legacy` linux package (and then restart R).\n\n\nMy first solution\/hack is below, written before I learned about the `tzdata-legacy` package (above). The hack was easy enough given that I have root access to the underlying filesystem. Unless you are loath to installing the extra package for some reason, you should likely go with `tzdata-legacy` instead. (These symlinks are the few that I wanted, the `tzdata-legacy` package has another 675 symlinks\/files. The package split affects a lot more than just `US\/*`, after all.)\n\n\n\n```\nmkdir \/usr\/share\/zoneinfo\/US\ncd \/usr\/share\/zoneinfo\/US\nln -s ..\/America\/Anchorage Alaska\nln -s ..\/America\/Adak Aleutian\nln -s ..\/America\/Phoenix Arizona\nln -s ..\/America\/Chicago Central\nln -s ..\/America\/New_York Eastern\nln -s ..\/America\/Indiana\/Indianapolis East-Indiana\nln -s ..\/Pacific\/Honolulu Hawaii\nln -s ..\/America\/Indiana\/Knox Indiana-Starke\nln -s ..\/America\/Detroit Michigan\nln -s ..\/America\/Denver Mountain\nln -s ..\/America\/Los_Angeles Pacific\nln -s ..\/Pacific\/Pago_Pago Samoa\n\n```\n\nAfter that, restart R (\"should not\" require reinstalling `lubridate` or `timechange` R packages) and it should then work. (I don't use RStudio, but you may need to restart that as well ... feedback on this is welcome.)\n\n\n\n```\nlubridate::with_tz(Sys.time(), tzone = \"US\/Eastern\")\n# [1] \"2024-03-18 09:55:08 EDT\"\n\n```\n\nAnd in a shell (outside of R) as well:\n\n\n\n```\n$ TZ=\"US\/Eastern\" date\nMon Mar 18 10:23:11 AM EDT 2024"}
{"questionId":"48f1d374a8b9bdee04722ff211e3f8b4a6c8e5441bc67b4827490c488325fbe5","question":"Make `git rebase -i` present and apply commits bottom to top\nGit shows commits in the interactive-rebase todo list (produced by `git rebase -i`) in the opposite of the usual order: the most ancestral commit (typically, the oldest commit) is shown first, not last.\n\n\nCan I get it the other way around, for consistency with the usual sort order of `git log`? That is, I'd like the todo list shown in the default sort order of `git log`, and I'd like the todo list to still be understood by Git after I've edited it, without having to manually re-reverse it.","questionMetadata":{"type":"implementation","tag":"bash","level":"intermediate"},"answer":"You can use a custom `sequence.editor` config:\n\n\n\n```\ngit -c sequence.editor='tac \"$1\" > \"$1.reverse\" && \"$(git config core.editor || echo \"${GIT_EDITOR:-${EDITOR:-vi}}\")\" \"$1.reverse\" && tac \"$1.reverse\" > \"$1\"' rebase -i ...\n\n```\n\nIt's easy to configure a reusable alias for that and use it:\n\n\n\n```\ngit config alias.revbase \"!git -c sequence.editor='tac \\\"\\$1\\\" > \\\"\\$1.reverse\\\" \\\n&& \\\"\\$(git config core.editor || echo \\\"\\${GIT_EDITOR:-${EDITOR:-vi}}\\\")\\\" \\\"\\$1.reverse\\\" \\\n&& tac \\\"\\$1.reverse\\\" > \\\"\\$1\\\"' rebase -i\"\ngit revbase @~10\n\n```\n\nHere's an approach that keeps the commented-out interactive-rebase hints on the bottom, and changes the line \"top to bottom\" to \"bottom to top\".\n\n\n\n```\nfunction gri {\n local cmd='\n grep -v \"^#\\|^\\$\" \"$1\" | tac >\"$1.reverse\" &&\n grep \"^#\\|^\\$\" \"$1\" | sed \"s\/top to bottom\/bottom to top\/\" >\"$1.hints\" &&\n cat \"$1.reverse\" \"$1.hints\" >\"$1.gri\" &&\n $EDITOR \"$1.gri\" &&\n tac \"$1.gri\" >\"$1\"'\n git -c sequence.editor=\"$cmd\" rebase -i \"$@\"\n}"}
{"questionId":"7059c8db780c898f8654fc339d3b609673b3ee50ce75feb2c6dca1d85c3da2ef","question":"Invalid version number in '-target arm64-apple-ios9999'\nXcode automatically updated yesterday and now I can't build my app anymore.\n\n\n- Xcode version is 15.4\n- clang version is 15.0.0\n\n\nAnd when I try to run\/build the app this is the error I get:\n\n\n\n> \n> \/clang:1:1 invalid version number in '-target arm64-apple-ios9999'\n> \n> \n> \n\n\nThat's it. The weirdest part is the `9999` number there, I can't find it anywhere in the code, must be something Xcode is setting.\n\n\nI tried running `softwareupdate --all --install --force` without success (it ended up updating my MacOS, but still same error).\n\n\nInstalling and running from Xcode 15.3 works just fine.","questionMetadata":{"type":"version","tag":"swift","level":"intermediate"},"answer":"I was on Sentry `8.24.0`; upgrading to `8.26.0` fixed it for me."}
{"questionId":"2cd38244f4b3ea2da80586467a96f46a60e9333884f80468793a105cb76e3e57","question":"Using Postgres 16 with Spring Boot 3.3.0\nI just upgraded from spring-boot 3.2.3 -> 3.3.0. After the upgrade flyway refuses to connect to postgres:\n\n\n\n```\nCaused by: org.flywaydb.core.api.FlywayException: Unsupported Database: PostgreSQL 16.2\n at org.flywaydb.core.internal.database.DatabaseTypeRegister.getDatabaseTypeForConnection(DatabaseTypeRegister.java:105)\n at org.flywaydb.core.internal.jdbc.JdbcConnectionFactory.<init>(JdbcConnectionFactory.java:73)\n at org.flywaydb.core.FlywayExecutor.execute(FlywayExecutor.java:134)\n at org.flywaydb.core.Flyway.migrate(Flyway.java:147)\n\n```\n\nWhat is the expected way to connect to postgres 16 using spring-boot 3.3.0 and flyway?","questionMetadata":{"type":"version","tag":"java","level":"intermediate"},"answer":"There is [pinned issue](https:\/\/github.com\/flyway\/flyway\/issues\/3780) to announce about extracting database support out from `flyway-core`.\n\n\nTry to add this dependency to your project:\n\n\n\n```\n<dependency>\n <groupId>org.flywaydb<\/groupId>\n <artifactId>flyway-database-postgresql<\/artifactId>\n<\/dependency>"}
{"questionId":"2695ac2c79464890f0ba11759d14392d80145b022e105d7d3f9eb0fddb454309","question":"StringIndexOutOfBoundException occurs when typing anything into a JavaFX TextField in both JDK21 and JDK8, Windows11\nWhen I run this simple code snippet of a JavaFX TextField element, I type something into the text field, and then StringIndexOutOfBoundsException is thrown periodically.\n\n\n### versions\n\n\n\n```\nJDK: 21.0.0, 21.0.2, 1.8\nJavaFX: \n`javafx.runtime.version=8.0.65 javafx.runtime.build=b17`\n`javafx.version=21 javafx.runtime.version=21+31 javafx.runtime.build=31`\nWindows:\n`Edition=Windows 11 Pro, Version=23H2`\n\n```\n\n## Error Message\n\n\n\n```\nException in thread \"JavaFX Application Thread\" java.lang.StringIndexOutOfBoundsException: Range [1, -2147483648) out of bounds for length 1\n at java.base\/jdk.internal.util.Preconditions$1.apply(Preconditions.java:55)\n at java.base\/jdk.internal.util.Preconditions$1.apply(Preconditions.java:52)\n at java.base\/jdk.internal.util.Preconditions$4.apply(Preconditions.java:213)\n at java.base\/jdk.internal.util.Preconditions$4.apply(Preconditions.java:210)\n at java.base\/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:98)\n at java.base\/jdk.internal.util.Preconditions.outOfBoundsCheckFromToIndex(Preconditions.java:112)\n at java.base\/jdk.internal.util.Preconditions.checkFromToIndex(Preconditions.java:349)\n at java.base\/java.lang.String.checkBoundsBeginEnd(String.java:4861)\n at java.base\/java.lang.String.substring(String.java:2830)\n at javafx.graphics@21\/com.sun.glass.ui.win.WinTextRangeProvider.GetText(WinTextRangeProvider.java:367)\n at javafx.graphics@21\/com.sun.glass.ui.win.WinApplication._runLoop(Native Method)\n at javafx.graphics@21\/com.sun.glass.ui.win.WinApplication.lambda$runLoop$3(WinApplication.java:185)\n at java.base\/java.lang.Thread.run(Thread.java:1583)\n\n```\n\n## source code\n\n\n\n```\npackage comp3111.qsproject;\n\n\/\/ Java program to create a textfield and add it to stage\nimport javafx.application.Application;\nimport javafx.scene.Scene;\nimport javafx.scene.control.*;\nimport javafx.scene.layout.StackPane;\nimport javafx.stage.Stage;\n\npublic class TextFieldTest extends Application {\n\n \/\/ launch the application\n public void start(Stage s)\n {\n \/\/ set title for the stage\n s.setTitle(\"creating TextField\");\n\n \/\/ create a textfield\n TextField b = new TextField();\n\n \/\/ create a stack pane\n StackPane r = new StackPane();\n\n \/\/ add textfield\n r.getChildren().add(b);\n\n \/\/ create a scene\n Scene sc = new Scene(r, 200, 200);\n\n \/\/ set the scene\n s.setScene(sc);\n\n s.show();\n }\n\n public static void main(String args[])\n {\n \/\/ launch the application\n launch(args);\n }\n}\n\n```\n\nThis problem was not resolved when I reinstalled my JDK.","questionMetadata":{"type":"version","tag":"java","level":"intermediate"},"answer":"Workaround: **Close other running apps**.\n\n\n**Update**: There is an official issue for this: <https:\/\/bugs.openjdk.org\/browse\/JDK-8330462>.\n\n\nAt [JabRef#11151](https:\/\/github.com\/JabRef\/jabref\/issues\/11151#issuecomment-2060779820) it was reported that the [DeepL Windows App](https:\/\/www.deepl.com\/en\/app\/) caused the issue. I tried it on my Windows 10 machine. Having DeepL running: Error appears. DeepL closed: Error gone.\n\n\n\n\n---\n\n\nFor the others to reproduce:\n\n\n1. Go to <https:\/\/www.deepl.com\/en\/app\/>\n2. Download the app\n3. Install the app for the current user\n4. Start the app\n5. Mark some text\n6. 
Press `Ctrl`+`C`+`C` to check that DeepL really runs\n7. Switch to a JavaFX app\n8. Enter something in a text field\n9. You should see the exception\n\n\nIf you could reproduce (or not), please share details at <https:\/\/github.com\/koppor\/jfx\/pull\/2>. It seems that not everyone can reproduce it, and there could be some specific setups. - I personally fired up a fresh Windows on Azure, created another user login (without (!) admin rights), logged in with that user and could reproduce. The issue does not appear if logged in as administrator!"}
{"questionId":"95adc568bbb7607f6b8b70733c94e7b7b928165da977160793a7846a7ca50ae0","question":"Selecting default search engine is needed for Chrome version 127\nAll of my Selenium scripts are raising errors after Chrome updated to version 127 because I always have to select a default search engine when the browser is being launched.\n\n\nI use ChromeDriver 127.0.6533.72.\n\n\nIs anyone experiencing the same issue?","questionMetadata":{"type":"version","tag":"python","level":"beginner"},"answer":"You need to add this Chrome Option to disable the *'choose your search engine'* screen:\n\n\n\n```\noptions.addArguments(\"--disable-search-engine-choice-screen\");\n\n```\n\nIf you are using selenium with Python, you'll have to use:\n\n\n\n```\noptions.add_argument(\"--disable-search-engine-choice-screen\")"}
{"questionId":"ff0fd0467b87b92af79a365983b7141ef0bc3087c844fcaf67f302d9e250c314","question":"What is Api.http file in .NET 8\nRecently I have installed .NET 8 and created the asp.net core project. The project structure is same just like .NET 7 but I don't know what is this Api.http file in this version.\n\n\nI have searched online and found out that this file will be used to test the api endpoints but how ?","questionMetadata":{"type":"version","tag":"c#","level":"beginner"},"answer":"It's a feature provided by Visual Studio 2022 for testing ASP.NET Core projects, particularly API applications. This file serves as a convenient way to send HTTP requests and view responses directly within Visual Studio.\n\n\nThe Visual Studio 2022 \".http\" file editor allows you to create and update \".http\" files within your project. These files can be used to define HTTP requests that you want to send to your API endpoints.\n\n\nWithin the \".http\" file, you can specify the details of HTTP requests such as the URL, HTTP method **(GET, POST, etc.)**, **headers, query parameters, request body,** etc.\n\n\nAfter sending a request from the \".http\" file, Visual Studio displays the response directly within the editor. This allows you to view the response status code, headers, and body, making it easy to debug and test your API endpoints.\n\n\nYou can also check the completed Microsoft official documentation of using http files.\n\n\n<https:\/\/learn.microsoft.com\/en-us\/aspnet\/core\/test\/http-files?view=aspnetcore-8.0>"}
{"questionId":"16875c14518a987d7bd9955a37cf98a35533242797b54ab49d9e8bb4c37bacea","question":"Placing randomly generated numbers into random positions in a row range\nI would like to generate 5 random number between 1 and 49 and place them into a row range like A1:AA1 in 5 random positions. The empty cells should get a value of 0.\n\n\nThe basic concept is `=RANDARRAY(1,27,1,49,TRUE)`. This almost works, but it fills in all the 27 cells. I need to somehow fill only 5 randomly chosen cells out of the 27 (e.g.: A1, G1, L1, M1, X1).\n\n\nHow can this be done?","questionMetadata":{"type":"implementation","tag":"other","level":"intermediate"},"answer":"Try this formula:\n\n\n\n```\n=LET(values,RANDARRAY(1,27,1,49,TRUE),\nrandom5, TAKE(SORTBY(SEQUENCE(27),RANDARRAY(27,,1,27,TRUE)),5),\nMAP(values,SEQUENCE(1,27),LAMBDA(v,i,IF(ISNA(MATCH(i,random5,0)),0,v))))\n\n```\n\nIt erases from the initial 27 values all but 5 - which have been created randomly\n\n\n`random5`: sorts a sequence of 27 randomly and takes top 5 values.\n\n\nThese values are then used as index of those that are kept - all other values are returned as 0."}
{"questionId":"31bb3dfa5e9217e5335c800add94d2c7d4a29222bfc3755fbf5c7859999af00e","question":"Compare number to string with ranges and single numbers\nI have the following tibble\n\n\n\n```\nlibrary(tidyverse)\ntest <- tibble(A = c(\"1994:2020, 2021\"), B = 1995)\n\n```\n\nI would like to check if the year in B is in the years given in column A. The years in column A are a string (the data is read from an Excel file).\nThe following clearly doesn't work (it gives \"No\", but I would like to have \"Yes\"):\n\n\n\n```\ntest %>%\n mutate(InA = ifelse(B %in% A, \"Yes\", \"No\"))\n\n> test\n# A tibble: 1 x 2\n A B\n <chr> <dbl>\n1 1994:2020, 2021 1995\n\n```\n\nI assume that I have to separate the string in A. However, A can contain more than one range and\/or more than one year (e.g. ( \"1994:2012, 2014, 2016:2020, 2021\") and using \"separate\" for different structures gets complicated. Perhaps there is more straightforward way.","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"Tidyverse equivalent of @SamR\u2019s strategy\n\n\n\n```\nlibrary(tidyverse)\n\ntest <- tibble(A = c(\"1994:2020, 2021\"), B = 1995)\n\ntest %>%\n mutate(InA = map2_lgl(A, B, ~ .y %in% eval(str2lang(\n paste0(\"c(\", .x, \")\")\n ))))\n#> # A tibble: 1 \u00d7 3\n#> A B InA \n#> <chr> <dbl> <lgl>\n#> 1 1994:2020, 2021 1995 TRUE\n\n```\n\nCreated on 2024-03-08 with [reprex v2.0.2](https:\/\/reprex.tidyverse.org)"}
{"questionId":"157b67ecf7d9336c1bc5d9c352db81a572bf79389a9968ec45593b970a82d8a5","question":"Logical AND (&&) does not short-circuit correctly in #if\nFor code:\n\n\n\n```\n#if defined(FOO) && FOO(foo)\n #error \"FOO is defined.\"\n#else\n #error \"FOO is not defined.\"\n#endif\n\n```\n\nMSVC 19.38 prints:\n\n\n\n```\n<source>(1): warning C4067: unexpected tokens following preprocessor directive - expected a newline\n<source>(4): fatal error C1189: #error: \"FOO is not defined.\"\n\n```\n\nICX 2024.0.0 and Clang 18.1 prints:\n\n\n\n```\n<source>:1:21: error: function-like macro 'FOO' is not defined\n 1 | #if defined(FOO) && FOO(foo)\n | ^\n<source>:4:6: error: \"FOO is not defined.\"\n 4 | #error \"FOO is not defined.\"\n | ^\n2 errors generated.\n\n```\n\nGCC 14.1 prints:\n\n\n\n```\n<source>:1:24: error: missing binary operator before token \"(\"\n 1 | #if defined(FOO) && FOO(foo)\n | ^\n<source>:4:6: error: #error \"FOO is not defined.\"\n 4 | #error \"FOO is not defined.\"\n | ^~~~~\nCompiler returned: 1\n\n```\n\nWhy does every compiler but MSVC print an error about an undefined macro when `FOO` is not defined (although MSVC prints a warning too)? Is there some special semantic that I am not seeing here?\n\n\n`FOO(foo)` should not be evaluated if `defined(FOO)` evaluates to `false`.","questionMetadata":{"type":"conceptual","tag":"c","level":"intermediate"},"answer":"If `FOO` is not defined (or is defined but not as a function-like macro), then `FOO(foo)` is a syntax error.\n\n\nThe `#if` directive expects an integer constant expression to follow it (including expressions of the form \"**defined** *identifier*\"). Since `FOO(foo)` can't be expanded due to `FOO` not being defined, this is not an integer constant expression.\n\n\nYou would get a similar error for something like this:\n\n\n\n```\nint main()\n{\n int x = some_valid_expression && undeclared_identifier;\n return 0;\n}\n\n```\n\nTo do what you want, you need to break up the `#if` directive into multiple ones:\n\n\n\n```\n#if defined(FOO)\n #if FOO(foo)\n #error \"FOO is defined and non-zero.\"\n #else\n #error \"FOO is zero.\"\n #endif\n#else\n #error \"FOO is not defined.\"\n#endif"}
{"questionId":"9421485f7e7fbb78c958aefe409cc05790df3df4ce20b312608eb417ce0640ee","question":"Ambiguity in method references\nConsider the following snippet:\n\n\n\n```\npublic static void main(String[] args) {\n Function<String, String> function = String::toUpperCase; \/\/OK\n\/\/ Comparator<String> comparator = String::toUpperCase; \/\/Compilation error(makes sense, as String.toUpperCase(Locale locale) & String.toUpperCase() are not compatible)\n fun(String::toUpperCase); \/\/ java: reference to fun is ambiguous\n }\n\n public static void fun(Function<String, String> function) { \/\/ String apply(String obj)\n System.out.println(\"Function\");\n }\n\n public static void fun(Comparator<String> comparator) { \/\/ int compare(String s1, String s2)\n System.out.println(\"Comparator\");\n }\n\n```\n\nI'm failing to understand the reason behind ambiguity error for method invocation `fun(String::toUpperCase)`.\n\n\nAs, both of the overloaded versions of `String::toUpperCase` themselves are not compatible with `int compare(String s1, String s2)` from the Comparator class, then how come the compiler complains about ambiguity in the first place?\n\n\nAm I missing something here?","questionMetadata":{"type":"conceptual","tag":"java","level":"intermediate"},"answer":"toUpperCase` has an overload that takes no parameters, and another overload that takes one parameter (`Locale`).\n\n\nThis makes the expression `String::toUpperCase` an [*inexact* method reference expression](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-15.html#jls-15.13.1). The expression could either refer to the no-argument overload, or the one-argument overload.\n\n\nBoth of the two `fun`s are determined to be [\"potentially applicable\"](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-15.html#jls-15.12.2.1), specifically because of this clause:\n\n\n\n> \n> - A method reference expression is potentially compatible with a functional interface type `T` if, where the arity of the\n> function type of `T` is n, there exists at least one potentially\n> applicable method when the method reference expression targets the\n> function type with arity n, and one of the following is true:\n> \n> \n> \t- The method reference expression has the form `ReferenceType :: [TypeArguments] Identifier` and at least one potentially applicable\n> \tmethod is either (i) static and supports arity n, or (ii) not static\n> \tand supports arity n-1.\n> \t- [irrelevant]\n> \n> \n> \n\n\n`String::toUpperCase` is potentially compatible with `Function<String, String>`, because `Function<String, String>` has arity 1 (takes one parameter), and `toUpperCase` is non-static and has a no-argument overload.\n\n\n`String::toUpperCase` is potentially compatible with `Comparator<String>`, because `Comparator<String>` has arity 2, and `toUpperCase` is non-static and has a one-argument overload. Note that this step does not check the parameter types or return types at all. It doesn't matter that the parameter type is `Locale` but `String` is actually expected.\n\n\nAfter finding the potentially applicable methods, we go on to [Identify Matching Arity Methods Applicable by Strict Invocation](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-15.html#jls-15.12.2.2). This is where things go wrong. Remember how `String::toUpperCase` is an *inexact* method reference? That means it's not pertinent to applicability - the compiler doesn't consider the method reference at all during this step. 
See also [this other question](https:\/\/stackoverflow.com\/q\/77918209\/5133585) that also involves an inexact method reference expression causing overload resolution errors.\n\n\nSo both `fun`s are applicable by strict invocation. The next step is to [find the most specific method](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-15.html#jls-15.12.2.5). This step considers subtyping, like `String` is more specific than `Object`. But `Comparator<String>` is unrelated to `Function<String, String>`, so we cannot find the *most* specific method, and an error occurs."}
{"questionId":"b347a07613361a2631ce73888981d70d08113ece264729b4a38c1d99851530a8","question":"How can I access a method that has the same name as an enum variant?\nThis is not a particularly practical question - I just stumbled across this in Rust that I was surprised compiles, and want to understand more what is going on.\n\n\nIt appears that you can make an enum with a tuple variant AND implement a method with the same name (and signature, although that doesn't appear to matter).\n\n\n\n```\n#[derive(Debug)]\nenum Test{\n Foo,\n Bar(u64)\n}\n\nimpl Test {\n \/\/\/ This function won't get called\n pub fn Bar(x: u64) -> Self {\n println!(\"In function {:?}\", x);\n Self::Foo\n } \n}\n\nfn main() {\n let a = Test::Bar(10);\n println!(\"{:?}\", a)\n}\n\n```\n\nSince it does compile it led me to suspect that the method must somehow live in a different \"namespace\" to the variant as far as the compiler is concerned, which makes me suspect there would be a way to disambiguate the call. I couldn't find anything in the documentation or by searching though.","questionMetadata":{"type":"version","tag":"rust","level":"intermediate"},"answer":"The method is completely uncallable.\n\n\nIt was apparently callable from 1.0 to 1.33 as `<Test>::Bar`, then an error from 1.34 to 1.36, and since then, has always used the variant: [Enum variants completely shadow inherent associated functions. #48758](https:\/\/github.com\/rust-lang\/rust\/issues\/48758)"}
{"questionId":"68bdcf60ccf5a9441c6076db0329eff50b5048cbb152eec23ec9f6ac4580500f","question":"Why can't I pass std::isfinite as a predicate function?\nI'm trying to pass a specific overload of `std::isfinite()` to a function, but GCC refuses:\n\n\n\n```\n0.cpp:9:24: error: no matching function for call to \u2018all_of(std::array<double, 2>::const_iterator, std::array<double, 2>::const_iterator, <unresolved overloaded function type>)\u2019\n 9 | return !std::all_of(a.begin(), a.end(), std::isfinite<double>);\n | ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n```\n\nHere's the source:\n\n\n\n```\n#include <algorithm>\n#include <array>\n#include <cmath>\n\nint main()\n{\n auto const a = std::array<double, 2>{{0.0, 1.0}};\n return !std::all_of(a.begin(), a.end(), std::isfinite<double>);\n}\n\n```\n\nWhy does it consider `std::isfinite<double>` to be an unresolved overloaded function type, and is there a solution simpler than wrapping in a lambda function of my own? I'd prefer not to have to write `[](double x){ return std::isfinite(x); }` if I don't need to.\n\n\nThis is something I came across in some code that was previously compiled with a Microsoft compiler, but which doesn't build for me using GCC.\n\n\nIf it matters, I see the same symptom with all the standards versions I tried: `-std=c++11`, `-std=c++17` and `-std=c++23`.","questionMetadata":{"type":"debugging","tag":"c++","level":"intermediate"},"answer":"Generally you cannot count on the absence of other overloads in the standard library.\n\n\n\n\n---\n\n\nThis also means that functions in the standard library cannot be taken their address, unless they are explicitly marked as addressable functions.\n\n\nAlso for custom functions, there is no such thing as a \"pointer to an overload set\". In the presence of different overloads, to get a pointer you must either pick one of the overloads:\n\n\n\n```\n void foo(int);\n void foo(double);\n auto ptr = &foo; \/\/ error\n auto ptr = static_cast<void(*)(int)>(foo); \/\/ ok\n\n```\n\nOr defer overload resolution to when the function is actually called (see below).\n\n\n\n\n---\n\n\nFrom [cppreference](https:\/\/en.cppreference.com\/w\/cpp\/numeric\/math\/isfinite) about `std::isfinite`:\n\n\n\n> \n> Additional overloads are provided for all integer types, which are treated as double.\n> \n> \n> \n\n\nand\n\n\n\n> \n> The additional overloads are not required to be provided exactly as (A). They only need to be sufficient to ensure that for their argument `num` of integer type, `std::isfinite(num)` has the same effect as `std::isfinite(static_cast<double>(num))`.\n> \n> \n> \n\n\nYou can wrap it inside a lambda:\n\n\n\n```\nstd::all_of(a.begin(), a.end(),[](auto x){ return std::isfinite(x);});"}
{"questionId":"1ae660dc588ab2f021a2ca8dd1958383961c2f5937e819fe38d5960fa77cfc5b","question":"Vue \/ Volar extension in Visual Studio Code keeps crashing: The JS\/TS language service immediately crashed 5\u00a0times\u2026\nI just wasted a day on this, so I thought I would get it down incase anyone else is experiencing it.\n\n\nOpened a `vue` 3 project and got the following error:\n\n\n\n> \n> The JS\/TS language service immediately crashed 5 times. The service\n> will not be restarted....\n> \n> \n> \n\n\nAnd then it listed a bunch of possible extensions that might be causing it - one of which is `Vue.volar`.\n\n\nDisabling the `Vue - Official` extension does stop the crash, but then we have no `vue` or `TypeScript` language services in VS Code.\n\n\nI narrowed it down to destructuring objects in HTML attributes in a `vue` template:\n\n\n\n```\n <RouterView v-slot=\"{Component}\">\n <Transition name=\"fade\" appear>\n <component :is=\"Component\" \/>\n <\/Transition>\n <\/RouterView>\n\n```\n\nIf you change that to\n\n\n\n```\n <RouterView v-slot=\"props\">\n <Transition name=\"fade\" appear>\n <component :is=\"props.Component\" \/>\n <\/Transition>\n <\/RouterView>\n\n```\n\nIts fine - but a workaround....","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"Turns out the issue started in the `2.0` release of `Vue - Official`\n\n\nYou can roll `Vue - Official` back to older versions.\n\n\n- Search for `Vue - Official` in the VS Code extensions panel.\n- Hit the cog icon\n- choose `Install another version...`\n\n\nFor me, the earliest working build was `1.8.27"}
{"questionId":"365f4a2fdb1a56d88a5a293867450112e5716cbcd9589094b05c005d575ab720","question":"Do I still need OnPush if my app is Zoneless?\nI have migrated my app to zoneless thanks to `provideExperimentalZonelessChangeDetection()` and having a mix of signals and Observables +`AsyncPipe`.\n\n\nDo I still need the `OnPush` ChangeDetection Strategy ?","questionMetadata":{"type":"conceptual","tag":"typescript","level":"intermediate"},"answer":"## TL;DR\n\n\nYes. \n\nJust as with zone-based change detection, it prevents your components from being checked if it's not needed, and thus increases the performance of each CD.\n\n\n\n\n---\n\n\n## Thorough explanation\n\n\nComponents using the `OnPush` change detection strategy will be checked by change detection if the parent was checked and if either:\n\n\n- The component was marked dirty (via `markForCheck()`\/`AsyncPipe`)\n- One of the input references changed\n- An event listener in the template fires\n\n\nWe can say the `OnPush` strategy decides **which** component will be checked by CD.\n\n\nAngular apps also need to decide when the tick is fired from the `ApplicationRef`. This is what we call scheduling. **When** is CD actually starting?\n\n\nIn Zone apps, Zone.js is the scheduler by the means of patching all the async APIs (`setTimeout()`, Promise, `addEventListener()`, etc). When one of those is called, a CD is scheduled.\n\n\nIn zoneless apps, this is no longer possible as no APIs are monkey patched. The framework needs to find another way to schedule CD. Today it uses following:\n\n\n- Signal updates (`set()` or `update()`)\n- `markForCheck()` or `AsyncPipe`\n- An event listener in the template fires\n\n\nTo sum-up:\n\n\n- Zoneless scheduling is about **when** components are checked\n- `OnPush` is about **which** component is checked\n\n\nAlso to make things clear, `OnPush` is not the default when using Zoneless scheduling."}
{"questionId":"c50ad829c3e1df5ef3319273b588121265b4e1c096dcbe353049e5832d38f1a3","question":"Angular 18 Polyfills warning\nI just upgraded to Angular 18 and I get the following warning when I do `ng serve`:\n\n\n\n```\n\u25b2 [WARNING] Polyfill for \"@angular\/localize\/init\" was added automatically. [plugin angular-polyfills]\n\n In the future, this functionality will be removed. Please add this polyfill in the \"polyfills\" section of your \"angular.json\" instead.\n\n\n```\n\n**angular.json**\n\n\n\n```\n \"build\": {\n \"builder\": \"@angular-devkit\/build-angular:browser-esbuild\",\n \"options\": {\n \"outputPath\": \"dist\/MyClientApp\",\n \"main\": \"src\/main.ts\",\n \"index\": \"src\/index.html\",\n \"polyfills\": [\n \"src\/polyfills.ts\"\n ],\n \"tsConfig\": \"tsconfig.app.json\",\n\n```\n\n**polyfills.ts** contains the line:\n\n\n\n```\n...\nimport '@angular\/localize\/init';\n...\n\n```\n\nWhat could be missing? I have in `package.json' the \"@angular\/localize\": \"^18.0.3\",","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"Add `\"@angular\/localize\/init\"` to the `polyfills` array in your `angular.json`.\n\n\nYou can then delete your polyfills file if it only contains `import '@angular\/localize\/init';` When I did this the warning went away."}
{"questionId":"2c346e6cd5c385bf22c3a9f7e3b612b859a6ccc0001bc1485ca0728e12874591","question":"How do you work with Non-sendable types in swift?\nI'm trying to understand how to use an Apple class, without Sendable, in an async context without getting warnings that this won't work in Swift 6.\n\n\nWeapon of choice is `NSExtensionContext` which I need to fetch a URL that's been passed into a share extension.\n\n\nThis is my original code that simply fetches a URL from the extension context. With the new concurrency checking enabled it gives the warning:\n\n\n\n> \n> Main actor-isolated property 'url' can not be mutated from a Sendable closure; this is an error in Swift 6\n> \n> \n> \n\n\n\n```\nclass ShareViewController: UIViewController {\n\n override func viewDidLoad() {\n super.viewDidLoad()\n fetchURL()\n }\n\n private func fetchURL() {\n \n guard let extensionContext = self.extensionContext,\n let item = extensionContext.inputItems.first as? NSExtensionItem,\n let attachments = item.attachments else { return }\n \n for attachment in attachments {\n \n if attachment.hasItemConformingToTypeIdentifier(\"public.url\") {\n attachment.loadItem(forTypeIdentifier: \"public.url\") { url, error in\n \n guard let url else { return }\n \n self.url = url as? URL <-- WARNING! Main actor-isolated property 'url' can not be mutated from a Sendable closure; this is an error in Swift 6\n }\n }\n }\n }\n}\n\n```\n\nI understand the function is called on the `MainActor` but the `extensionContext` can be used on any actor which is the reason for the complaint.\n\n\nFirstly can I perhaps mark the url property as `Sendable` so it can be modified from any actor?\n\n\nTrying something different, I modified it to use the latest `async\/await` versions of `extensionContect`.\n\n\n\n```\noverride func viewDidLoad() {\n super.viewDidLoad()\n \n Task {\n await fetchURL()\n }\n}\n\nprivate func fetchURL() async {\n \n guard let extensionContext = self.extensionContext,\n let item = extensionContext.inputItems.first as? NSExtensionItem,\n let attachments = item.attachments else { return }\n \n for attachment in attachments {\n \n if let url = try? await attachment.loadItem(forTypeIdentifier: \"public.url\") as? URL { <-- WARNINGS!\n self.url = url\n }\n }\n}\n\n```\n\nThis actually gives me 4 warnings on the same line!\n\n\n\n> \n> Non-sendable type 'any NSSecureCoding' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary\n> \n> \n> Passing argument of non-sendable type '[AnyHashable : Any]?' outside of main actor-isolated context may introduce data races\n> \n> \n> Passing argument of non-sendable type '[AnyHashable : Any]?' outside of main actor-isolated context may introduce data races\n> \n> \n> Passing argument of non-sendable type 'NSItemProvider' outside of main actor-isolated context may introduce data races\n> \n> \n> \n\n\nLet's try detaching the Task so it runs on a new actor:\n\n\n\n```\nprivate func fetchURL() async {\n \n guard let extensionContext = self.extensionContext,\n let item = extensionContext.inputItems.first as? NSExtensionItem,\n let attachments = item.attachments else { return }\n \n Task.detached {\n for attachment in attachments { <-- WARNING! Capture of 'attachments' with non-sendable type '[NSItemProvider]' in a `@Sendable` closure\n let attachment = attachment\n if let url = try? await attachment.loadItem(forTypeIdentifier: \"public.url\") as? 
URL {\n await MainActor.run { [weak self] in\n self?.url = url\n }\n }\n }\n }\n}\n\n```\n\nJust the 1 warning with this:\n\n\n\n> \n> Capture of 'attachments' with non-sendable type '[NSItemProvider]' in a `@Sendable` closure\n> \n> \n> \n\n\nFinal try, let's put everything in the detached actor. This requires accessing the `extensionContext` asynchronously using `await`:\n\n\n\n```\nprivate func fetchURL() async {\n \n Task.detached { [weak self] in\n guard let extensionContext = await self?.extensionContext, <-- WARNING! Non-sendable type 'NSExtensionContext?' in implicitly asynchronous access to main actor-isolated property 'extensionContext' cannot cross actor boundary\n let item = extensionContext.inputItems.first as? NSExtensionItem,\n let attachments = item.attachments else { return }\n \n for attachment in attachments {\n let attachment = attachment\n if let url = try? await attachment.loadItem(forTypeIdentifier: \"public.url\") as? URL {\n await MainActor.run { [weak self] in\n self?.url = url\n }\n }\n }\n }\n}\n\n```\n\nWe get the error:\n\n\n\n> \n> Non-sendable type 'NSExtensionContext?' in implicitly asynchronous access to main actor-isolated property 'extensionContext' cannot cross actor boundary\n> \n> \n> \n\n\nI know 1 way to clear all the warnings:\n\n\n\n```\nextension NSExtensionContext: @unchecked Sendable {}\n\n```\n\nThe problem I have with this, is using `@unchecked` seems to be like telling the compiler to just ignore the consequences.\n\n\nWhat would be the correct way to use this `extensionContext` in a `UIViewController` that runs on `@MainActor`?","questionMetadata":{"type":"version","tag":"swift","level":"intermediate"},"answer":"Your second example using the `async` version of `loadItem` is what you're going to want eventually. [SE-0414](https:\/\/github.com\/apple\/swift-evolution\/blob\/main\/proposals\/0414-region-based-isolation.md) (Region based isolation) should fix the warning when it is shipped.\n\n\nIn the meantime, your first example is simply incorrect. It's a race condition, since `loadItem` [does not promise to call its closure on the main actor](https:\/\/developer.apple.com\/documentation\/foundation\/nsitemprovider\/1403900-loaditem#):\n\n\n\n> \n> The block may be executed on a background thread.\n> \n> \n> \n\n\nSwift is correctly warning you about this bug. You can fix it as usual, by moving the actor's update to the actor's context:\n\n\n\n```\nTask { @MainActor in\n self.url = url as? URL\n}\n\n```\n\nThis will leave you with the `NSSecureCoding` warning. That one is because Foundation and UIKit are not yet fully annotated. Until they are, you should import as `@preconcurrency` when you need to. (The compiler will warn you if you add the annotation unnecessarily.)\n\n\n\n```\n@preconcurrency import UIKit\n\n```\n\nWith these two changes, I see no warnings in Xcode 15.3 under \"complete\" concurrency."}
{"questionId":"0c1e51f6f8141e209f2fb014a553da6aadf5cf9c9a9cf70304e3a82dd936832c","question":"Ignore on ProxyClass\\_\\_setInitialized() cannot be added\nI'm trying to update to Symfony 7.0. So far the update was successful. Now, when I try to call one of my endpoints I get the following error:\n\n\n`\"Ignore on \\\"Proxies\\\\__CG__\\\\App\\\\Entity\\\\Role::__setInitialized()\\\" cannot be added. Ignore can only be added on methods beginning with \\\"get\\\", \\\"is\\\", \\\"has\\\" or \\\"set\\\".\"`\n\n\nThe endpoint loads the User entities from Doctrine via the UserRepository. There is no custom query, just the simple `findBy` function is used. The Role is an associated relationship.\nI don't use the `#[Ignore]` attribute anywhere in my code.\n\n\nThe version of Symfony is 7 and I have Doctrine 3.0. I did some research and found that in the `LazyGhostTrait.php` file there is an `#[Ignore]`. So this class is probably causing the issue?\n\n\nIs this a bug in Symfony or do I need some additional configuration somewhere?","questionMetadata":{"type":"version","tag":"php","level":"intermediate"},"answer":"2024-04-29 EDIT: Symfony solved the issue for problematic package symfony\/var-exporter. Upgrade to version ^7.0.7 or ^6.4.7 depending on your Symfony version to get rid of the bug.\n\n\n\n\n---\n\n\nAs pointed out by Jose9988, this is a bug in Symfony, and more precisely by version 7.0.6 of `symfony\/var-exporter` package. A [PR](https:\/\/github.com\/symfony\/symfony\/pull\/54485) fixing this issue has been merged and will be part of the next patch releases.\n\n\nIn the meantime, just downgrade `symfony\/var-exporter` to version 7.0.4 with `composer require \"symfony\/var-exporter:7.0.4\"`\n\n\nEdit : for users using Symfony 6.4, the bug was introduced in version 6.4.6, so just `composer require \"symfony\/var-exporter:6.4.5\""}
{"questionId":"dedf380a1fc51a6cc7d841f3de94edbcbe3f64e49cd6adc99a9e21bc909a65e2","question":"What is the most efficient way to fillna multiple columns with values from other columns in a way that they can be paired with a suffix?\nThis is my DataFrame:\n\n\n\n```\nimport pandas as pd\nimport numpy as np\ndf = pd.DataFrame(\n {\n 'x': [1, np.nan, 3, np.nan, 5],\n 'y': [np.nan, 7, 8, 9, np.nan],\n 'x_a': [1, 2, 3, 4, 5],\n 'y_a': [6, 7, 8, 9, 10]\n\n }\n)\n\n```\n\nExpected output is `fill_na` columns `x` and `y`:\n\n\n\n```\n x y x_a y_a\n0 1.0 6.0 1 6\n1 2.0 7.0 2 7\n2 3.0 8.0 3 8\n3 4.0 9.0 4 9\n4 5.0 10.0 5 10\n\n```\n\nBasically I want to fillna `x` with `x_a` and `y` with `y_a`. In other words each column should be paired with another column that has the suffix `_a` and the column name.\n\n\nI can get this output by using this code:\n\n\n\n```\nfor col in ['x', 'y']:\n df[col] = df[col].fillna(df[f'{col}_a'])\n\n```\n\nBut I wonder if it is the best\/most efficient way? Suppose I got hundreds of columns like these","questionMetadata":{"type":"optimization","tag":"python","level":"intermediate"},"answer":"What about using an Index to select all columns at once and [`set_axis`](https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.DataFrame.set_axis.html) to realign the DataFrame:\n\n\n\n```\ncols = pd.Index(['x', 'y'])\ndf[cols] = df[cols].fillna(df[cols+'_a'].set_axis(cols, axis=1))\n\n```\n\n*NB. this is assuming all columns in `cols` and all '\\_a' columns exist. If you're not sure you could be safe and use [`intersection`](https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.Index.intersection.html) and [`reindex`](https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.DataFrame.reindex.html)*:\n\n\n\n```\ncols = pd.Index(['x', 'y']).intersection(df.columns)\ndf[cols] = df[cols].fillna(df.reindex(columns=cols+'_a').set_axis(cols, axis=1))\n\n```\n\nOr for an approach that is fully independent of explicitly passing input columns and just relying on the suffix (`_a`):\n\n\n\n```\nsuffix = '_a'\n\n# find columns \"xyz\" that have a \"xyz_a\" counterpart\nc1 = df.columns.intersection(df.columns+suffix)\nc2 = c1.str.removesuffix(suffix)\n# select, fillna, update\ndf[c2] = df[c2].fillna(df[c1].set_axis(c2, axis=1))\n\n```\n\nOutput:\n\n\n\n```\n x y x_a y_a\n0 1.0 6.0 1 6\n1 2.0 7.0 2 7\n2 3.0 8.0 3 8\n3 4.0 9.0 4 9\n4 5.0 10.0 5 10\n\n```\n\nExample for which the second approach would be needed:\n\n\n\n```\ndf = pd.DataFrame(\n {\n 'x': [1, np.nan, 3, np.nan, 5],\n 'z': [np.nan, 7, 8, 9, np.nan],\n 'p_a': [1, 2, 3, 4, 5],\n 'y_a': [6, 7, 8, 9, 10]\n\n }\n)"}
{"questionId":"90d6448567e80fc9d47ad4995382384a83c7b48bf8bedd473865273f754ade82","question":"Any way to detect SparcWorks on SunOS?\nI've got a legacy C++ code base which includes the following:\n\n\n\n```\n\/\/ this kludge is required because SparcWorks 3.0.1 under SunOS\n\/\/ includes malloc.h in stdlib.h, and misdeclares free() to take a char*, \n\/\/ and malloc() and memalign() to return char*\n\n```\n\nClearly this is some leftover from ancient C. The comment is followed by prototypes (wrong here on Fedora Linux 39, they clash with the ones glibc-2.38-18 has). I'd like to `#if` that part out cleanly. Any macro that I can use?","questionMetadata":{"type":"implementation","tag":"c++","level":"intermediate"},"answer":"You could try to use **both** these macros:\n\n\n- `__SUNPRO_CC` (for C++) or `__SUNPRO_C` (for C), which are macros typically defined by the **SparcWorks compiler** indicating the usage of Sun C\/C++ compiler;\n- `__sun`, which is a macro defined by the **SunOS platform**, indicating the usage of the Sun compiler.\n\n\n\n```\n#if defined(__SUNPRO_CC) && defined(__sun)\n\/\/ Specific code for SparcWorks compiler on SunOS\n#else\n\/\/ Code for other compilers or platforms\n#endif\n\n```\n\n*Maybe useful reference*: [Sun\u2122 Studio 12: C++ FAQ](https:\/\/docs.oracle.com\/cd\/E19205-01\/820-4155\/c++_faq.html)"}
{"questionId":"0743de57ad9286c420194487a59697af9c2479a91fac7d4bbad1d5145070f9e8","question":"How do I perform pandas cumsum while skipping rows that are duplicated in another field?\nI am trying to use the pandas.cumsum() function, but in a way that ignores rows with a value in the ID column that is duplicated and specifically only adds the last value to the cumulative sum, ignoring all earlier values.\nExample code below (I couldn't share the real code, which is for work).\n\n\n\n```\nimport pandas as pd, numpy as np\nimport random as rand\nid = ['a','b','c','a','b','e','f','a','b','k']\nvalue = [12,14,3,13,16,7,4,6,10,18]\n\ndf = pd.DataFrame({'id':id, 'value':value})\ndf[\"cumsum_of_value\"] = df['value'].cumsum()\ndf[\"desired_output\"] = [\n 12,26,29,30,32,39,43,36,30,48\n]\ndf[\"comments\"] = [\"\"]*len(df)\ndf.loc[df.index==0, \"comments\"]=\"standard cumsum\"\ndf.loc[df.index==1, \"comments\"]=\"standard cumsum\"\ndf.loc[df.index==2, \"comments\"]=\"standard cumsum\"\ndf.loc[df.index==3, \"comments\"]=\"cumsum of rows 1-3, ignore row 0\"\ndf.loc[df.index==4, \"comments\"]=\"cumsum of rows 2-4, ignore rows 0, 1\"\ndf.loc[df.index==5, \"comments\"]=\"cumsum of rows 2-5, ignore rows 0, 1\"\ndf.loc[df.index==6, \"comments\"]=\"cumsum of rows 2-6, ignore rows 0, 1\"\ndf.loc[df.index==7, \"comments\"]=\"cumsum of rows 2,4-7, ignore rows 0, 1, 3\"\ndf.loc[df.index==8, \"comments\"]=\"cumsum of rows 2,5-8, ignore rows 0, 1, 3, 4\"\ndf.loc[df.index==9, \"comments\"]=\"cumsum of rows 2,5-9, ignore rows 0, 1, 3, 4\"\nprint(df)\n\n```\n\nIn this example, there are seven (7) unique values in the ID column (a, b, c ,d, e, f, g), so the cumsum should only ever sum a max of seven (7) records as its output on any row.\n\n\nIs this possible using combinations of functions such as cumsum(), groupby(), duplicated(), drop\\_duplicates(), and avoiding the use of an iterative loop?\n\n\nI've tried the below\n\n\n\n```\ndf[\"duped\"] = np.where(df[\"id\"].duplicated(keep='last'),0,1)\ndf[\"value_duped\"] = df[\"duped\"] * df[\"value\"]\ndf[\"desired_output_attempt\"] = df[\"cumsum_of_value\"] - df[\"value_duped\"]\n\n```\n\nBut it doesn't come close to the correct answer. I can't think of how to get something like this to result in the desired output without iterating.","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"Try:\n\n\n\n```\ndf[\"out\"] = (\n df.groupby(\"id\")[\"value\"].transform(\"diff\").fillna(df[\"value\"]).cumsum().astype(int)\n)\n\nprint(df)\n\n```\n\nPrints:\n\n\n\n```\n id value cumsum_of_value desired_output out\n0 a 12 12 12 12\n1 b 14 26 26 26\n2 c 3 29 29 29\n3 a 13 42 30 30\n4 b 16 58 32 32\n5 e 7 65 39 39\n6 f 4 69 43 43\n7 a 6 75 36 36\n8 b 10 85 30 30\n9 k 18 103 48 48"}
{"questionId":"d8eb828e648f8d66f9299054e76564968e4041d468a47fd0a3c3aa0b484b79c2","question":"Angular 18: ng build without browser folder\nI am upgrading my Angular 17 application to Angular 18 and want to migrate to the new `application` builder.\n\n\nI am using `ng update @angular\/core@18 @angular\/cli@18` and opted in to the new `application` builder when I was asked. Next, I updated the `angular.json` file so that the browser build's location is using `dist\/project-x` instead of `dist\/project-x\/browser` as suggested by the update process:\n\n\n\n> \n> The output location of the browser build has been updated from `dist\/project-x` to `dist\/project-x\/browser`. You might need to adjust your deployment pipeline or, as an alternative, set `outputPath.browser` to `\"\"` in order to maintain the previous functionality.\n> \n> \n> \n\n\nHere is an extract of my `angular.json` file:\n\n\n\n```\n{\n \"$schema\": \".\/node_modules\/@angular\/cli\/lib\/config\/schema.json\",\n \"version\": 1,\n \"newProjectRoot\": \"projects\",\n \"projects\": {\n \"project-x\": {\n \/\/ ...\n \"architect\": {\n \"build\": {\n \"builder\": \"@angular-devkit\/build-angular:application\",\n \"options\": {\n \"outputPath\": {\n \"base\": \"dist\/project-x\",\n \"browser\": \"\"\n },\n \/\/ ...\n },\n \/\/ ...\n \"configurations\": {\n \/\/ ...\n \"development\": {\n \/\/ ...\n \"outputPath\": {\n \"base\": \"dist\/project-x\",\n \"browser\": \"\"\n }\n }\n \/\/ ...\n\n```\n\n`ng build`, `ng build --configuration development` and `ng build --configuration production` **works as expected**.\n\n\n**However**, when overriding the output path in the command line, then it does not work as expected.\n\n\nThe command below, will create a folder `browser` in `\/projects\/project-x-backend\/`:\n\n\n\n```\nng build --base-href=\/x\/ --output-path=\/projects\/project-x-backend\/wwwroot \\\n --watch --configuration development --verbose\n\n```\n\n**How can I get rid of the `browser` folder when using `ng build --watch` with a custom output path?** (I would like to avoid setting the output path for the `development` configuration to `\/projects\/project-x-backend\/wwwroot` in `angular.json` itself.)","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"I found a solution that works for me by adding an additional configuration (without needing to modify the `development` configuration) in `angular.json` and use the configuration in the command line:\n\n\n\n```\n{\n \"$schema\": \".\/node_modules\/@angular\/cli\/lib\/config\/schema.json\",\n \"version\": 1,\n \"newProjectRoot\": \"projects\",\n \"projects\": {\n \"project-x\": {\n \/\/ ...\n \"architect\": {\n \"build\": {\n \/\/ ...\n \"configurations\": {\n \/\/ ...\n \"development\": {\n \/\/ ...\n },\n\n \"dev-watch\": {\n\n \/\/ ... duplicate other configuration from development\n \/\/ then specify another outputPath:\n\n \"outputPath\": {\n \"base\": \"\/projects\/project-x-backend\/wwwroot\",\n \"browser\": \"\"\n }\n\n }\n \/\/ ...\n\n```\n\nAnd then I execute `ng build` with:\n\n\n\n```\nng build --base-href=\/x\/ --watch --configuration dev-watch --verbose"}
{"questionId":"974144b6dc8deaf7f1ed5127831c3b2f4325389576c413ee156b1d7292deb4d4","question":"Does a constructor parameter of a nested class shadow members of the enclosing class?\nclass A\n{\n private:\n int a;\n\n public:\n class B\n {\n public:\n B(int a) : b(a) {}\n\n int b;\n };\n};\n\n\nint main(void)\n{\n return 0;\n}\n\n```\n\nclang (-Weverything) warns:\n\n\n\n```\nt.cpp(10,15): warning: constructor parameter 'a' shadows the field 'a' of 'A' [-Wshadow-field-in-constructor]\n 10 | B(int a) : b(a)\n | ^\nt.cpp(4,9): note: previous declaration is here\n 4 | int a;\n\n```\n\nI know that since C++11 nested classes have access to outer classes as if they were friends, but `B` is just declared inside `A` (there is no member object of `B` in `A`, how can `B` constructor param `a` shadow `A` member `a` ?","questionMetadata":{"type":"conceptual","tag":"c++","level":"intermediate"},"answer":"> \n> Does a constructor parameter of a nested class shadow members of the enclosing class?\n> \n> \n> \n\n\nYes.\n\n\nName lookup needs no `A` instance in `B`. Because `B` is nested inside `A`, unqualified name lookup finds `A::a`.\n\n\nFrom [cppreference](https:\/\/en.cppreference.com\/w\/cpp\/language\/unqualified_lookup):\n\n\n\n> \n> For a name used anywhere in class definition (including base class specifiers and nested class definitions), except inside a member function body, a default argument of a member function, exception specification of a member function, or default member initializer, where the member may belong to a nested class whose definition is in the body of the enclosing class, the following scopes are searched:\n> \n> \n> a) the body of the class in which the name is used until the point of use,\n> \n> \n> b) the entire body of its base class(es), recursing into their bases when no declarations are found,\n> \n> \n> c) **if this class is nested, the body of the enclosing class until the definition of this class and the entire body of the base class(es) of the enclosing class,**\n> \n> \n> [...]\n> \n> \n> \n\n\nc) means that an unqualified unshadowed `a` inside `B` refers to `A::a`. Your code works because `int a` shadows `A::a` and because in the initializer list `b(a)` uses the constructor parameter called `a`.\n\n\nThat the enclosing class is a friend does not matter at this point, because access comes after name lookup. As [Jarod42 pointed out](https:\/\/stackoverflow.com\/questions\/78788649\/does-a-constructor-parameter-of-a-nested-class-shadow-members-of-the-enclosing-c#comment138911710_78788649), you can modify the code (rename the parameter but keep `b(a)`) to get an error because `A::a` is non static.\n\n\n\n\n---\n\n\nThanks to [Artyer for an example](https:\/\/stackoverflow.com\/questions\/78788649\/does-a-constructor-parameter-of-a-nested-class-shadow-members-of-the-enclosing-c#comment138912828_78788717) where the member `a` can actually be used for something:\n\n\n\n```\nstruct A {\n int a;\n struct B {\n public:\n decltype(a) b; \/\/ equivalent to decltype(A::a) b;\n\n B(int a) : b(a) {} \/\/ int a shadows A::a\n \/\/ equivalent to B(int x) : b(x) {}\n };\n};"}
{"questionId":"25985e6195d2a19cf05c980660082948fb1e5c8c8d918314081ac0bbdd7b7327","question":"Find the optimal clipped circle\nGiven a `NxN` integer lattice, I want to find the clipped circle which maximizes the sum of its interior lattice point values.\n\n\nEach lattice point `(i,j)` has a value `V(i,j)` and are stored in the following matrix `V`:\n\n\n\n```\n [[ 1, 1, -3, 0, 0, 3, -1, 3, -3, 2],\n [-2, -1, 0, 1, 0, -2, 0, 0, 1, -3],\n [ 2, 2, -3, 2, -2, -1, 2, 2, -2, 0],\n [-2, 0, -3, 3, 0, 2, -1, 1, 3, 3],\n [-1, -2, -1, 2, 3, 3, -3, -3, 2, 0],\n [-3, 3, 2, 0, -3, -2, -1, -3, 0, -3],\n [ 3, 2, 2, -1, 0, -3, 1, 1, -2, 2],\n [-3, 1, 3, 3, 0, -3, -3, 2, -2, 1],\n [ 0, -3, 0, 3, 2, -2, 3, -2, 3, 3],\n [-1, 3, -3, -2, 0, -1, -2, -1, -1, 2]]\n\n```\n\nThe goal is to maximize the sum of values `V(i,j)` of the lattice points lying on the boundary and within interior of a (clipped) circle with radius `R`, with the assumptions and conditions:\n\n\n- the circle has center at (0,0)\n- the circle can have any positive radius (not necessarily an integer radius, i.e., rational).\n- the circle may be clipped at two lattice points, resulting in a diagonal line as shown in the picture. This diagonal line has a slope of -45 degrees.\n\n\nSome additional details:\n\n\nThe score for a clipped circle is the sum of all the integers that are both within the circle (or on the border) and on the side of the diagonal line including (0,0). The values on (or near) the border are -3, 1, 3, -1, -3, 3, -1, 2, 0, 3.\n\n\nEven though the circle can have any radius, we need only consider circles that intersect a grid point precisely so there are n^2 different relevant radiuses. Further, we need only record one position where the circle intersects with the diagonal line to fully specify the clipped circle. Note that this intersection with the diagonal does not need to be at an integer coordinate.\n\n\nIf the optimal solution doesn't have the diagonal clipping the circle at all then we need only return the radius of the circle.\n\n\nWhat I have found so far:\n\n\nIf we only wanted to find the optimal circle we could do that quickly in time proportional to the input size with:\n\n\n\n```\nimport numpy as np\nfrom math import sqrt\nnp.random.seed(40)\n\ndef find_max(A):\n n = A.shape[0]\n sum_dist = np.zeros(2 * n * n, dtype=np.int32)\n for i in range(n):\n for j in range(n):\n dist = i**2 + j**2\n sum_dist[dist] += A[i, j]\n cusum = np.cumsum(sum_dist)\n # returns optimal radius with its score\n return sqrt(np.argmax(cusum)), np.max(cusum)\nA = np.random.randint(-3, 4, (10, 10))\nprint(find_max(A))\n\n```\n\nHow quickly can the optimal clipped circle be found?","questionMetadata":{"type":"optimization","tag":"python","level":"advanced"},"answer":"Start by creating a cumulative frequency table, or a fenwick tree. You'll have a record for each radius of circle, with value corresponding to explored weights at that distance from the origin. Then, begin a BFS from the origin.\n\n\nFor each diagonal \"frontier\", you'll need to update your table\/tree with the radius:weight key-value pair (add weight to existing value). You'll also need to then query the table\/tree for the current cumulative sum at each radius just added, noting the maximum and updating a global running maximum accordingly.\n\n\nOnce your search terminates, you'll have the maximum sum for your clipped-circle. 
If you want to reconstruct the circle, just store the max radius and BFS depth along with the global max sum itself.\n\n\nThis will give you your solution in `O(N^2 log N)` time, as there will be N^2 updates and queries, which are `O(log N)` each.\n\n\nThe intuition behind this solution is that by exploring along this diagonal \"frontier\" outward, you implicitly clip all your circles you query since the weights above\/right of it haven't been added yet. By calculating the max (at each search depth) for just the radii that were just updated, you also enforce the constraint that the circles intersect the clipping line at an integer coordinate.\n\n\n**Update**\nHere is python code showing this in action. It needs cleaned up, but at least it shows the process. I opted to use cumulative frequency \/ max arrays, instead of trees, since that'll probably lend itself to vectorization with numpy for OP.\n\n\n\n```\ndef solve(matrix):\n n = len(matrix)\n\n max_radius_sqr = 2 * (n - 1) ** 2\n num_bins = max_radius_sqr.bit_length() + 1\n\n frontier = [(0, 0)]\n\n csum_arr = [[0] * 2 ** i for i in range(num_bins)[::-1]]\n cmax_arr = [[0] * 2 ** i for i in range(num_bins)[::-1]]\n\n max_csum = -float(\"inf\")\n max_csum_depth = None\n max_csum_radius_sqr = None\n\n depth = 0\n\n while frontier:\n next_frontier = []\n\n if depth + 1 < n: # BFS up\n next_frontier.append((0, depth + 1))\n\n # explore frontier, updating csums and maximums per each\n for x, y in frontier:\n if x + 1 < n: # BFS right\n next_frontier.append((x + 1, y))\n\n index = x ** 2 + y ** 2 # index is initially the radius squared\n\n for i in range(num_bins):\n csum_arr[i][index] += matrix[y][x] # update csums\n\n if i != 0: # skip first, since no children to take max of\n sum_left = csum_arr[i-1][index << 1] # left\/right is tree notation of the array\n max_left = cmax_arr[i-1][index << 1]\n max_right = cmax_arr[i-1][index << 1 | 1]\n cmax_arr[i][index] = max(max_left, sum_left + max_right) # update csum maximums\n\n index >>= 1 # shift off last bit, update sums\/maxs again, log2 times\n\n # after entire frontier is explored, query for overall max csum over all radii\n # update running global max and associated values\n if cmax_arr[-1][0] > max_csum:\n max_csum = cmax_arr[-1][0]\n max_csum_depth = depth\n index = 0\n for i in range(num_bins-1)[::-1]: # reconstruct max radius (this could just as well be stored)\n sum_left = csum_arr[i][index << 1]\n max_left = cmax_arr[i][index << 1]\n max_right = cmax_arr[i][index << 1 | 1]\n\n index <<= 1\n if sum_left + max_right > max_left:\n index |= 1\n max_csum_radius_sqr = index\n\n depth += 1\n frontier = next_frontier\n\n # total max sum, dx + dy of diagonal cut, radius ** 2\n return max_csum, max_csum_depth, max_csum_radius_sqr\n\n```\n\nCalling this with the given test case produces the expected output:\n\n\n\n```\n matrix = [\n [-1, 3, -3, -2, 0, -1, -2, -1, -1, 2],\n [ 0, -3, 0, 3, 2, -2, 3, -2, 3, 3],\n [-3, 1, 3, 3, 0, -3, -3, 2, -2, 1],\n [ 3, 2, 2, -1, 0, -3, 1, 1, -2, 2],\n [-3, 3, 2, 0, -3, -2, -1, -3, 0, -3],\n [-1, -2, -1, 2, 3, 3, -3, -3, 2, 0],\n [-2, 0, -3, 3, 0, 2, -1, 1, 3, 3],\n [ 2, 2, -3, 2, -2, -1, 2, 2, -2, 0],\n [-2, -1, 0, 1, 0, -2, 0, 0, 1, -3],\n [ 1, 1, -3, 0, 0, 3, -1, 3, -3, 2],\n ][::-1]\n print(solve(matrix))\n\n# output: 13 9 54\n\n```\n\nIn other words, it says the maximum total sum is `13`, with a diagonal cut stagger (dx + dy) of `9`, and radius squared of `54`.\n\n\nIf I have some time tonight or this weekend, I'll clean up the code a bit."}
{"questionId":"22e263619073b057cce3564de83ded7f5e3fdca3872db4052c887c860f17a382","question":"ASP.NET Core Web API on Azure App Service - POST and PUT requests failing 500 for a short period\nThe web application is a REST API and a SPA. It's maintained and currently on .NET 6.0 and have been working steadily for years.\n\n\nThe requests are CORS and the server is properly configured for this.\n\n\nSuddenly we have several outbursts per day of `POST` and `PUT` requests consistently failing with 500 server error. And they are failing quickly, only 30 ms. This goes on for 5-15 minutes and then returns to normal again.\n\n\nAll `GET` requests still working perfectly fine in between, which is strange.\n\n\nEven stranger these failing requests are not logged, like they never reach the web server. Checked both web server logs (IIS) and ASP.NET Core application exceptions and traces.\n\n\nThe response header `X-Powered-By: ASP.NET` is also missing from these failing requests. Which would be present for normal 500 server errors.\nAlso suggesting the requests never reach the server.\n\n\nThe App Service Plan is only using 30% of it's resources currently.\n\n\nSame behaviour is confirmed across Chrome, Edge and Safari. But you can have the issue in Chrome, while a session in Edge is working flawlessly on the same PC.\n\n\nIf we close the browser and re-open, the issues are gone.\n\n\nIt all started this month: *May 2024*.\n\n\nAlso worth mentioning that we have a DNS load balancer, Azure Traffic Manager.\n\n\nThis only operates on DNS queries and returns the closest of the two instances of the REST API services.\n\n\nTraffic Manager does therefore not log any requests.\n\n\n**Update** confirmed that error also occurs *without* Traffic Manager.\n\n\n**Update 2** We have tested deploying on a brand new App Service. And tried upgrading to .NET8.0. To no avail\n\n\nThat leaves us with browser's network log the only place to inspect. 
We have exported numbers of HAR files and looked for differences between failing requests and working requests, and found none.\n\n\nHas anyone experienced similar behaviour of any kind?\n\n\nConfiguration of ASP.NET Core in `Startup.cs`:\n\n\n\n```\npublic class Startup\n{\n readonly string _corsPolicyName = \"corsOrigins\";\n public IConfiguration Configuration { get; }\n public IWebHostEnvironment Environment { get; }\n\n private static IConfiguration config;\n public static IConfiguration GetConfiguration()\n {\n return config;\n }\n\n public Startup(IConfiguration configuration, IWebHostEnvironment environment)\n {\n Configuration = configuration;\n config = configuration;\n Environment = environment;\n }\n\n public void ConfigureServices(IServiceCollection services)\n {\n if (Environment.IsDevelopment())\n IdentityModelEventSource.ShowPII = true;\n\n \/\/ logging\n services.AddApplicationInsightsTelemetry();\n services.AddLogging(logging => \n {\n logging.AddSimpleConsole();\n logging.AddAzureWebAppDiagnostics();\n });\n services.Configure<AzureFileLoggerOptions>(options =>\n {\n options.FileName = \"filelog-\";\n options.FileSizeLimit = 50 * 1024;\n options.RetainedFileCountLimit = 5;\n });\n services.Configure<AzureBlobLoggerOptions>(options =>\n {\n options.BlobName = \"Backend.txt\";\n });\n\n \/\/ so our claims will not be translated\n JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();\n\n \/\/ configure languages\n services.Configure<RequestLocalizationOptions>(options =>\n {\n var supportedCultures = new[]\n {\n new CultureInfo(\"en\"),\n new CultureInfo(\"no\")\n };\n options.DefaultRequestCulture = new RequestCulture(\"en\");\n options.SupportedCultures = supportedCultures;\n options.SupportedUICultures = supportedCultures;\n });\n\n \/\/ add Identity\n services.AddIdentity<ApplicationUser, ApplicationRole>()\n .AddEntityFrameworkStores<AuthContext>()\n .AddRoleManager<RoleManager<ApplicationRole>>()\n .AddDefaultTokenProviders();\n services.AddUserAndPasswordPolicies();\n\n services.AddCors(options =>\n {\n {\n options.AddPolicy(_corsPolicyName, builder =>\n builder.SetIsOriginAllowedToAllowWildcardSubdomains()\n .WithOrigins(\"https:\/\/*.our.domain\")\n .AllowAnyMethod()\n .AllowAnyHeader()\n .AllowCredentials());\n }\n });\n\n services.AddAuthentication(options =>\n {\n options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;\n options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;\n }).AddJwtBearer(o =>\n {\n o.MapInboundClaims = false;\n o.Authority = Configuration.GetValue<string>(\"authServerUrl\");\n o.Audience = \"backend\";\n o.RequireHttpsMetadata = true;\n o.SaveToken = true;\n o.TokenValidationParameters = new TokenValidationParameters\n {\n NameClaimType = \"name\",\n RoleClaimType = \"role\",\n ValidateIssuer = true,\n ValidateAudience = true\n };\n });\n\n services.AddControllersWithViews(options =>\n {\n options.Filters.Add<WebCustomExceptionFilter>();\n })\n .AddNewtonsoftJson(options =>\n {\n options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;\n options.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;\n });\n\n services.AddSignalRManager();\n\n services.AddRazorPages()\n .AddRazorRuntimeCompilation();\n\n services.AddSwaggerGenNewtonsoftSupport();\n services.AddSwaggerGen(options =>\n {\n options.SwaggerDoc(\"v1\", new OpenApiInfo { Title = \"server debug api spec\", Version = \"v1\" });\n options.SchemaFilter<EnumSchemaFilter>();\n });\n }\n\n public void 
Configure(IApplicationBuilder app, IWebHostEnvironment env)\n {\n StaticEnvironment.IsDebug = env.IsDevelopment();\n StaticEnvironment.ContentRootPath = env.ContentRootPath;\n\n app.UseCors(_corsPolicyName);\n\n if (env.IsDevelopment())\n {\n app.UseStaticFiles();\n app.UseDeveloperExceptionPage();\n app.UseSwagger();\n app.UseSwaggerUI(c => {\n c.DocumentTitle = \"Backend Swagger docs\";\n var sidebar = Path.Combine(env.ContentRootPath, \"wwwroot\/sidebar.html\");\n c.HeadContent = File.ReadAllText(sidebar);\n c.InjectStylesheet(\"\/colors.css\");\n c.InjectStylesheet(\"\/style.css\");\n c.SwaggerEndpoint(\"\/swagger\/v1\/swagger.json\", \"server\");\n });\n }\n else\n {\n app.UseHttpsRedirection();\n app.UseHsts();\n }\n\n \/\/ Localization\n app.UseRequestLocalization();\n app.UseRouting();\n\n app.UseAuthentication();\n app.UseAuthorization();\n\n app.UseEndpoints(endpoints =>\n {\n endpoints.MapControllers();\n });\n \n\n \/\/ Enable static files for use in MVC views. \n app.UseStaticFiles();\n app.UseStaticFiles(new StaticFileOptions()\n {\n FileProvider = new PhysicalFileProvider(\n Path.Combine(Directory.GetCurrentDirectory(), @\"Style\")),\n RequestPath = new PathString(\"\/Style\")\n });\n }\n}\n\n```\n\nTypical controller signature:\n\n\n\n```\n[Authorize]\n[ApiController]\n[Route(\"api\/[controller]\")]\npublic class ImprovementController : ControllerBase\n\n```\n\nThe Web client fetch code\n\n\n\n```\nconst fullUrl = `${baseUrl}\/${url}`\nconst bearer = getBearer()\nconst connectionId = getConnectionId()\n \nconst req = {\n method: \"POST\",\n headers: {\n \"Authorization\": bearer,\n \"Content-Type\": \"application\/json\"\n },\n body: JSON.stringify(data)\n}\n \nconst res = await fetch(fullUrl, req)","questionMetadata":{"type":"debugging","tag":"c#","level":"intermediate"},"answer":"There are several tickets raised at Microsoft. Please see <https:\/\/learn.microsoft.com\/en-us\/answers\/questions\/1687258\/our-azure-app-service-application-started-to-exper>\n\n\nWe have the same issue and a lot of others have. Post requests are failing randomly with 500 error and gets are working fine. After a while it works again. Apparently it can be fixed in the meantime by setting the setting of HTTP\/2 back to HTTP\/1.1\n\n\nI hope to confirm this next monday when all our customers are back to work"}
{"questionId":"473c97f59710df5e37e1e8ff473fb9b8e661010ce25162afb99427b84064d225","question":"UserWarning: Plan failed with a cudnnException: CUDNN\\_BACKEND\\_EXECUTION\\_PLAN\\_DESCRIPTOR\nI'm trying to train a model with Yolov8. Everything was good but today I suddenly notice getting this warning apparently related to `PyTorch` and `cuDNN`. In spite the warning, the training seems to be progressing though. I'm not sure if it has any negative effects on the training progress.\n\n\n\n```\nsite-packages\/torch\/autograd\/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ..\/aten\/src\/ATen\/native\/cudnn\/Conv_v8.cpp:919.)\n return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\n\n```\n\n**What is the problem and how to address this?**\n\n\nHere is the output of `collect_env`:\n\n\n\n```\nCollecting environment information...\nPyTorch version: 2.3.0+cu118\nIs debug build: False\nCUDA used to build PyTorch: 11.8\nROCM used to build PyTorch: N\/A\nOS: Ubuntu 20.04.6 LTS (x86_64)\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\nClang version: Could not collect\nCMake version: version 3.29.3\nLibc version: glibc-2.31\nPython version: 3.9.7 | packaged by conda-forge | (default, Sep 2 2021, 17:58:34) [GCC 9.4.0] (64-bit runtime)\nPython platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31\nIs CUDA available: True\nCUDA runtime version: 11.8.89\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: \nGPU 0: NVIDIA A100 80GB PCIe\nNvidia driver version: 515.105.01\ncuDNN version: Probably one of the following:\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn.so.8.8.0\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn_adv_infer.so.8.8.0\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn_adv_train.so.8.8.0\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn_cnn_infer.so.8.8.0\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn_cnn_train.so.8.8.0\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn_ops_infer.so.8.8.0\n\/usr\/lib\/x86_64-linux-gnu\/libcudnn_ops_train.so.8.8.0\nHIP runtime version: N\/A\nMIOpen runtime version: N\/A\nIs XNNPACK available: True\nCPU:\nArchitecture: x86_64\n\nVersions of relevant libraries:\n[pip3] numpy==1.26.4\n[pip3] onnx==1.16.0\n[pip3] onnxruntime==1.17.3\n[pip3] onnxruntime-gpu==1.17.1\n[pip3] onnxsim==0.4.36\n[pip3] optree==0.11.0\n[pip3] torch==2.3.0+cu118\n[pip3] torchaudio==2.3.0+cu118\n[pip3] torchvision==0.18.0+cu118\n[pip3] triton==2.3.0\n[conda] numpy 1.24.4 pypi_0 pypi\n[conda] pytorch-quantization 2.2.1 pypi_0 pypi\n[conda] torch 2.1.1+cu118 pypi_0 pypi\n[conda] torchaudio 2.1.1+cu118 pypi_0 pypi\n[conda] torchmetrics 0.8.0 pypi_0 pypi\n[conda] torchvision 0.16.1+cu118 pypi_0 pypi\n[conda] triton 2.1.0 pypi_0 pypi","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"**June 2024 Solution**: Upgrade torch version to 2.3.1 to fix it:\n\n\n`pip3 install torch torchvision torchaudio --index-url https:\/\/download.pytorch.org\/whl\/cu118"}
{"questionId":"c3816c6c02efc8e22fc6db7873a17b7d50f587cdc59a7b696716458fb2cf4eab","question":"The `\\*ngFor` directive was used in the template, but neither the `NgFor` directive nor the `CommonModule` was imported\nI'm new to angular and I can't use the ngFor component. This is the error:\n\n\nThe `*ngFor` directive was used in the template, but neither the `NgFor` directive nor the `CommonModule` was imported. Use Angular's built-in control flow @for or make sure that either the `NgFor` directive or the `CommonModule` is included in the `@Component.imports` array of this component.\n\n\nAnd this is the code:\n\n\n\n```\n<h2>List<\/h2>\n\n<ul>\n <li *ngFor=\"let c of contacts\">\n {{c.id}} - {{c.name}}\n <\/li>\n<\/ul>","questionMetadata":{"type":"version","tag":"typescript","level":"beginner"},"answer":"From Angular 17 with standalone components, things like ngFor need to be included in the `imports` array of your component.\n\n\ne.g.\n\n\n\n```\n@Component({\n selector: 'app-contacts',\n standalone: true,\n imports: [NgFor],\n templateUrl: '.\/contacts.component.html',\n styleUrl: '.\/contacts.component.scss'\n})\n\n```\n\nBut better to use the new control flow syntax:\n\n\n\n```\n<ul>\n @for (c of contacts; track c.id) {\n <li>{{c.id}} - {{c.name}}<\/li>\n }\n<\/ul>"}
{"questionId":"d43d9f71b01f9fef34490802e8c170b6ff932f37fe37b0c597d8287240510977","question":"First std::mutex::lock() crashes in application built with latest Visual Studio 2022\nRecently I installed the latest Visual Studio 2022 v17.10 to build my programs, and initially all went well. But after some other program installation, my programs started failing immediately on start where the first `std::mutex::lock()` is called with the exception\n\n\n\n> \n> 0xC0000005: Access violation reading location 0x0000000000000000.\n> \n> \n> \n\n\nand stack trace:\n\n\n\n```\nmsvcp140.dll!mtx_do_lock(_Mtx_internal_imp_t * mtx, const xtime * target) Line 100 C++\n[Inline Frame] my.dll!std::_Mutex_base::lock()\n[Inline Frame] my.dll!std::unique_lock<std::mutex>::{ctor}(std::mutex &)\n\n```\n\nIt turned out that the installed application was built in a previous version of Visual Studio 2022, and it silently downgraded `C:\\Windows\\System32\\msvcp140.dll` to version `14.34.31931.0`, where the exception happens.\n\n\nI personally managed to restore normal operation of my programs, by pressing `Repair` in `Microsoft Visual C++ 2015-2022 Redistributable (x64) - 14.40.33810`, which restored `C:\\Windows\\System32\\msvcp140.dll` to version `14.40.33810.0`.\n\n\nBut since this unexpected crash on start-up can easily happen to the users of my program, I would like to ask, is there a way to make the programs built in the latest Visual Studio 2022 compatible with runtimes from previous versions of the same Visual Studio 2022? Or at least how to show meaningful error to the users if the runtime is downgraded for some reason instead of silent application termination.","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"Short answer, use the `_DISABLE_CONSTEXPR_MUTEX_CONSTRUCTOR` preprocessor macro.\n\n\nThis nugget of info from the Microsoft\/STL changelog might help us understand the visual studio snafu that unfolded over the past couple days.\n\n\n\n> \n> Fixed mutex's constructor to be constexpr. #3824 #4000 #4339 Note:\n> Programs that aren't following the documented restrictions on binary\n> compatibility may encounter null dereferences in mutex machinery. You\n> must follow this rule: When you mix binaries built by different\n> supported versions of the toolset, the Redistributable version must be\n> at least as new as the latest toolset used by any app component.\n> \n> \n> You can define \\_DISABLE\\_CONSTEXPR\\_MUTEX\\_CONSTRUCTOR as an escape\n> hatch.\n> \n> \n> --- [microsoft\/STL changelog](https:\/\/github.com\/microsoft\/STL\/wiki\/Changelog)\n> \n> \n> \n\n\nI ran into similar issue on a GitHub Actions workflow and arrived here from these independently reported issues in the Visual Studio developer community:\n\n\n1. <https:\/\/developercommunity.visualstudio.com\/t\/All-my-std::unique_lock-crashed-after-th\/10665376?space=41&sort=newest&viewtype=all>\n2. <https:\/\/developercommunity.visualstudio.com\/t\/Access-violation-in-_Thrd_yield-after-up\/10664660>\n3. <https:\/\/github.com\/actions\/runner-images\/issues\/10004>\n\n\nYou can find a detailed breakdown - <https:\/\/github.com\/actions\/runner-images\/issues\/10004#issuecomment-2156109231>"}
{"questionId":"a2169cb795265e0a99c2e6b9199fca94eeb6679e379ffa720fd7069b948dac53","question":"Is there a way to setup a remote quarkus postgresql dev-service?\nDue to security concerns, my company doesn't allows to use containers on our laptops.\n\n\nSo we can't use the normal quarkus:dev to run our test that connects to Postgresql.\n\n\nBut they provides us a remote machine where we can use Podman to run some containers.\n\n\nWhat I'm doing now is to manually ssh to that machine and starting a Postgresql container before running local tests.\n\n\nWhat I would like is to do this automatically, and also find a way to do the same on Jenkins when when we need to run a pipeline to release a new version.","questionMetadata":{"type":"implementation","tag":"java","level":"intermediate"},"answer":"This is a common scenario, and the intention of Quarkus's developer joy features is to allow it to work in a frictionless way, without requiring scripts or manual tunneling.\n\n\nThere are two options, although which one works best for you will depend a bit on how your company's remote podman is set up.\n\n\n1. Remote [dev services](https:\/\/quarkus.io\/guides\/dev-services). (When you run Quarkus in dev mode, the automatic provisioning of unconfigured services is called 'dev services'.) The idea here is that you use [dev services](https:\/\/quarkus.io\/guides\/dev-services) normally, but under the covers, the container client is connecting to the remote instances. For Dev services, Testcontainers provides container connectivity under the covers. This should work transparently as long as `podman run` works. You'd set it up using something like\n\n\n\n```\npodman system connection add remote --identity ~\/.ssh\/my-key ssh:\/\/my-host\/podman\/podman.sock\npodman system connection default remote\n\n```\n\nIf you don't have a local `podman` client, or if the podman connection settings don't sort it out, setting `DOCKER_HOST` to the right remote socket will also tell Testcontainers where to look.\n\n\n1. [Remote dev mode](https:\/\/quarkus.io\/guides\/maven-tooling#remote-development-mode). Here, the whole application is running in a container on the remote server. Changes to your local files are reflected in the remote instance.\n\n\nTo use remote dev mode, you build a special jar and then launch it in the remote environment. Add the following to your `application.properties`:\n\n\n\n```\n%dev.quarkus.package.type=mutable-jar \n\n```\n\nThen build the jar (they could be in the application.properties, but then you couldn't commit it to source control):\n\n\n\n```\nQUARKUS_LIVE-RELOAD_PASSWORD=<arbitrary password> .\/mvnw install\n\n```\n\nThe install will build you a normal `fast-jar` dockerfile. Run it in your remote environment with `QUARKUS_LAUNCH_DEVMODE=true` added to the podman run command.\n\n\nThen, locally, instead of `mvn quarkus:dev`, you'd run `.\/mvnw quarkus:remote-dev -Dquarkus.live-reload.url=http:\/\/my-remote-host:8080`\n\n\n<https:\/\/quarkus.io\/guides\/maven-tooling#remote-development-mode> has a more complete set of instructions. This option does have more moving parts and more latency, since you're transferring your whole application to the remote server every time code changes. So if you can, just configuring podman and using remote dev services is probably better.\n\n\nA third option, which probably isn't relevant for you, is to use [Testcontainers Cloud](https:\/\/testcontainers.com\/cloud\/). 
Quarkus dev services use Testcontainers under the covers, and Testcontainers Cloud is a convenient way of running Testcontainers remotely."}
{"questionId":"1bde12f450e3dde214abd14843a68fe14a561083aa5e4cf112ef6988bd08f112","question":"Boolean addition in R data frame produces a boolean instead of an integer\nIf I try to create a new column in an R dataframe by adding 3 boolean expressions in one step, it results in a boolean rather than an integer. If I use an intermediate step to first create columns for the 3 boolean expressions, I can add them up and get an integer. I don't understand why the two sets of code produce different results.\n\n\n\n```\n#The input is a dataframe with 3 variables that are sometimes missing\n#and sometimes not.\nsubjid <- c(101,102,103,104,105,106,107,108)\nvar1 <- c(1,2,3,4,NaN,NaN,NaN,NaN)\nvar2 <- c(1,2,NaN,NaN,5,6,NaN,NaN)\nvar3 <- c(1,NaN,3,NaN,5,NaN,7,NaN)\ndf <- data.frame(subjid, var1, var2, var3)\ndf\n\n```\n\n\n```\nsubjid var1 var2 var3\n1 101 1 1 1\n2 102 2 2 NaN\n3 103 3 NaN 3\n4 104 4 NaN NaN\n5 105 NaN 5 5\n6 106 NaN 6 NaN\n7 107 NaN NaN 7\n8 108 NaN NaN NaN\n\n```\n\n\n```\n#This code was intended to count how many of the 3 variables were nonmissing\n#But it produces an unexpected result\ndf$nonmissing_count_a <- !is.na(df$var1) + !is.na(df$var2) + !is.na(df$var3)\ntable(df$nonmissing_count_a)\n\n```\n\n\n```\nFALSE TRUE\n5 3\n\n```\n\n\n```\n#This code is intended to obtain the same count of nonmissing variables\n#And it works as expected\ndf$var1_nonmissing <- !is.na(df$var1)\ndf$var2_nonmissing <- !is.na(df$var2)\ndf$var3_nonmissing <- !is.na(df$var3)\ndf$nonmissing_count_b <- df$var1_nonmissing + df$var2_nonmissing + df$var3_nonmissing\ntable(df$nonmissing_count_b)\n\n```\n\n\n```\n0 1 2 3\n1 3 3 1","questionMetadata":{"type":"debugging","tag":"r","level":"intermediate"},"answer":"It happens because of *operator precedence* (see `?Syntax`), try\n\n\n\n```\ntable((!is.na(df$var1)) + (!is.na(df$var2)) + (!is.na(df$var3)))\n\n0 1 2 3 \n1 3 3 1\n\n```\n\nThe addition `+` has higher precedence than negation `!`\n\n\nKeep in mind that you're actually expecting output from counting or adding 1s and 0s (numeric)\n\n\n\n```\ntable(as.numeric(!is.na(df$var1)) + \n as.numeric(!is.na(df$var2)) + \n as.numeric(!is.na(df$var3)))\n\n0 1 2 3 \n1 3 3 1\n\n```\n\nAlternatively try `rowSums`\n\n\n\n```\ntable(rowSums(!is.na(df[,-1])))\n\n0 1 2 3 \n1 3 3 1"}
{"questionId":"63da4b88ac314cde892c2ae6373797ec41aedc73870587cbfc61d12115795220","question":"HttpClientTestingModule is deprecated, how to replace it?\nAfter upgrading my application to Angular 18.0.4, my test classes say:\n\n\n`'HttpClientTestingModule' is deprecated. Add provideHttpClientTesting() to your providers instead.`\n\n\nTherefore I adapted my code as follows:\n\n\n\n```\n await TestBed.configureTestingModule(\n {\n imports: [\n AssetDetailsComponent,\n ],\n providers: [\n \/\/ replacement for HttpClientTestingModule:\n provideHttpClientTesting() \n ]\n })\n .compileComponents();\n\n```\n\nHowever, when I run the tests, I get the following error:\n\n\n\n```\nNullInjectorError: R3InjectorError(Standalone[AssetDetailsComponent])[InventoryActionService -> InventoryActionService -> _HttpClient -> _HttpClient]:\n NullInjectorError: No provider for _HttpClient!\n\n```\n\nIf I use `provideHttpClient()` instead of `provideHttpClientTesting()` it works, yet I doubt that this is best practice. What is the correct solution to this issue?","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"Also add `provideHttpClient()` before `provideHttpClientTesting()`\n\n\n\n```\nproviders: [\n provideHttpClient(),\n provideHttpClientTesting() \n]\n\n```\n\nAs mentioned in [the docs](https:\/\/angular.dev\/guide\/http\/testing)."}
{"questionId":"53f3b6ffff08989b4352d12982adad2d511457f2abf98afdeba62b9a2cc28f47","question":"Dynamic name with glue in mutate call\nI want to create a function that takes up as the first argument the name of a data set, and as a second argument, part of a column's name from the dataframe. I then want to use `glue` to dynamically construct the column name, in the function, and use that column in a `mutate` call, like so:\n\n\n\n```\nlibrary(tidyverse)\n\ntmp <- \n function(data, type){\n var <- glue::glue(\"Sepal.{type}\")\n iris |> \n select({{var}}) |> \n mutate(\"{var}\" := mean({{var}}))\n}\n\n```\n\nI've tried a lot of things, but I struggle to find a solution where the column is called both for the name of the new column (here, `\"{var}\"`) and for the computation of the new column (here, `mean({{var}})`). What should one do in such cases?\n\n\nHere, calling `tmp(iris, \"Length\")` should return a `150x1` data.frame with the mean value in all rows.\n\n\n`tidyverse` solution are preferred, or any pipe-based answers.","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"You can use `mean({{var}})` if you modify your code just a little bit, for example, using `as.symbol` (or `as.name`) to define `var`, instead of a `glue` char\n\n\n\n```\ntmp <- function(data, type) {\n var <- as.symbol(glue::glue(\"Sepal.{type}\"))\n data |>\n select(var) |>\n mutate(\"{var}\" := mean({{ var }}))\n}\n\n```\n\n\n\n---\n\n\nFor some alternatives, I guess you can try `get(var)` or `!!rlang::syms(var)`, for example\n\n\n\n```\ntmp <- function(data, type) {\n var <- glue::glue(\"Sepal.{type}\")\n data |>\n select({{ var }}) |>\n mutate(\"{var}\" := mean(get(var)))\n}\n\n```\n\nor\n\n\n\n```\ntmp <- function(data, type) {\n var <- rlang::sym(glue::glue(\"Sepal.{type}\"))\n data |>\n select(var) |>\n mutate(\"{var}\" := mean(!!var))\n}"}
{"questionId":"0e65f2c1d58ef1264707f2e717f1989775f4ad8a9817b9b61775331feeeeb3e1","question":"g++ optimizes away check for INT\\_MIN in release build\nI encountered a problem where g++ optimized out something it should not have. I reduced the problem to the following example:\nI have a static lib with a function `bool my_magic_function(int* x)`, which decrements `x` by 1 if it can, otherwise (`x == INT_MIN`), it returns `false` and does not touch the original value.\nIf I use the function in a debug build, then it works as expected. But in release build the check is optimized away. Platform:\n\n\nOn RHEL 9.3 with g++ (GCC) 11.4.1 20230605 -> Problem present\n\n\nUbuntu 22.04 g++ 11.4.0 g++ or 10.5.0 g++ -> Problem present\n\n\nUbuntu 22.04 g++ 9.5.0 -> Code works as expected in release too.\n\n\nHere is a minimal example with a static lib and a simple main.cpp using the function:\n\n\nalmalib.h:\n\n\n\n```\nbool my_magic_function(int* x); \n\n```\n\nalmalib.cc:\n\n\n\n```\n#include \"almalib.h\"\n#include <cstring>\n#include <limits>\n\nbool my_magic_function(int* x) {\n int cp_new;\n \/\/ integer overflow is undefined, so lets make sure it becomes int max\n if (*x == std::numeric_limits<int>::lowest()) {\n cp_new = std::numeric_limits<int>::max(); \n } else {\n cp_new = *x - 1;\n } \n if (cp_new < *x) {\n *x = cp_new;\n return true; \n }\n return false;\n} \n\n```\n\nmain.cpp\n\n\n\n```\n#include \"almalib.h\"\n#include <iostream>\n#include <limits>\n\nint main()\n{\n for (int x : {0, std::numeric_limits<int>::lowest()})\n {\n int x2 = x;\n std::cerr << \"Res for \" << x << \" \" << (my_magic_function(&x2) ? \"OK\" : \"NOT_OK\") << \" val: \" << x2 << std::endl;\n }\n}\n\n```\n\nCompile:\n\n\n\n```\ng++ -c almalib.cc -o almalib.o\nar crf libalma.a almalib.o\ng++ main.cpp -o run -L. -lalma\n\ng++ -c almalib.cc -O3 -o almalibR.o\nar crf libalmaR.a almalibR.o\ng++ main.cpp -O3 -o runR -L. -lalmaR\n\n```\n\noutout for Debug (.\/run):\n\n\n\n```\nRes for 0 OK val: -1\nRes for -2147483648 NOT_OK val: -2147483648\n\n```\n\noutput for Release (.\/runR):\n\n\n\n```\nRes for 0 OK val: -1\nRes for -2147483648 OK val: 2147483647\n\n```\n\ngoing through the generated assembly with gdb, `my_magic_function` is reduced to 3 lines:\n\n\n\n```\n0x401320 <_Z17my_magic_functionPi> subl $0x1,(%rdi) \n0x401323 <_Z17my_magic_functionPi+3> mov $0x1,%eax \n0x401328 <_Z17my_magic_functionPi+8> ret \n\n```\n\nMy questions are:\n\n\n- Is this a known issue?\n- What are my options to prevent it from happening? (I can trivially rewrite the example function, but not the original problem). Are there any compiler hints, or should I disable a certain optimization type?","questionMetadata":{"type":"debugging","tag":"c++","level":"advanced"},"answer":"These can be expensive, but `-fwrapv` and `-ftrapv` both make your problem evaporate.\n\n\n`-fwrapv` means that the compiler assumes signed integers act like unsigned integers and wrap around. This is what your hardware almost certainly does. `-ftrapv` means it adds traps (exceptions) for when signed integers wrap around (you can probably set flags on your hardware to get this to happen, if not it will add in logic to catch it).\n\n\nWith either flag, your code acts correctly.\n\n\nWhile `-fwrapv` seems harmless, what it means is that a bunch of optimizations in loops and comparisons cannot be done.\n\n\nWithout `-fwrapv`, the compiler can assume `a+b` with both greater than 0 is greater than `a` and greater than `b`. 
With it, it cannot.\n\n\nAs a guess, your compiler is first taking the early branch code\n\n\n\n```\nif (*x == std::numeric_limits<int>::lowest()) {\n cp_new = std::numeric_limits<int>::max(); \n} else {\n cp_new = *x - 1;\n}\n\n```\n\nand saying \"on the hardware target, this is equivalent to\"\n\n\n\n```\ncp_new = *x - 1;\n\n```\n\nbecause it knows the hardware target has signed underflow that wraps around. Significant optimization, eliminates a needless branch!\n\n\nIt then looks at\n\n\n\n```\nif (cp_new < *x) {\n *x = cp_new;\n return true; \n}\n\n```\n\nthen replaces cp\\_new:\n\n\n\n```\nif ((*x - 1)< *x) {\n *x = (*x - 1);\n return true; \n}\n\n```\n\nand reasons \"well, signed underflow is undefined behavior, so something minus 1 is always less than something\". Thus optimizing it into:\n\n\n\n```\n*x = *x-1;\nreturn true; \n\n```\n\nthe error being that it used `cp_new = *x - 1` in a context where underflow is *defined* and *wraps around* first, then reused it without allowing for the wrap around case.\n\n\nBy making underflow cause a trap *or* making it assumed to be true, we block the assumptions that let it do the 2nd false optimization.\n\n\nBut this story - why `fwrapv`\/`ftrapv` work - is a \"just so story\", it is not informed by actually reading the gcc code or bug reports; it is a guess I made as to the cause of the bug, which led to the idea of messing with the overflow settings, which did fix your symptoms. Consider it a fairy tale explaining why `-fwrapv` fixes your bug."}
{"questionId":"b7de9f9f38539268c323d83380b53d3256d6a6784d8d260bb9bad664d8c16867","question":"Filtering the Results of Expand.Grid\n**I am trying to generate a list of all combinations numbers that satisfy all the following conditions:**\n\n\n- Any combination is exactly 6 numbers long\n- The possible numbers are only 1,5,7\n- 1 can only be followed by either 1 or 5\n- 5 can only be followed by either 5 or 7\n- 7 can only be followed by 7\n- There must be at least two 1's\n\n\nI tried to do this with the expand.grid function.\n\n\n**Step 1:** First, I generated a list of all 6 length combinations with 1,5,7:\n\n\n\n```\nnumbers <- c(1, 5, 7)\nall_combinations <- data.frame(expand.grid(rep(list(numbers), 6)))\n\n```\n\n**Step 2:** Then, I tried to add variables to satisfy the conditions:\n\n\n\n```\nall_combinations$starts_with_1 <- ifelse(all_combinations$Var1 == 1, \"yes\", \"no\")\nall_combinations$numbers_ascending <- apply(all_combinations, 1, function(x) all(diff(as.numeric(x)) >= 0))\n\n\nall_combinations$numbers_ascending <- ifelse(all_combinations$numbers_ascending , \"yes\", \"no\")\n\n\nall_combinations$at_least_two_ones <- apply(all_combinations, 1, function(x) sum(x == 1) >= 2)\n\nall_combinations$at_least_two_ones <- ifelse(all_combinations$at_least_two_ones, \"yes\", \"no\")\n\n```\n\n**Step 3:** Finally, I tried to keep rows that satisfy all 3 conditions:\n\n\n\n```\nall_combinations <- all_combinations[all_combinations$starts_with_1 == \"yes\" & all_combinations$numbers_ascending == \"yes\" & all_combinations$at_least_two_ones == \"yes\", ]\n\nall_combinations\n\n```\n\nHowever, the results are all NA:\n\n\n\n```\n Var1 Var2 Var3 Var4 Var5 Var6 starts_with_1 numbers_ascending at_least_two_ones\nNA NA NA NA NA NA NA <NA> <NA> <NA>\nNA.1 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.2 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.3 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.4 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.5 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.6 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.7 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.8 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.9 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.10 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.11 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.12 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.13 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.14 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.15 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.16 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.17 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.18 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.19 NA NA NA NA NA NA <NA> <NA> <NA>\nNA.20 NA NA NA NA NA NA <NA> <NA> <NA>\n\n```\n\n**Note**: I am trying to do this in a flexible way so that if I need to change something (e.g. modify to at least three 1's, or modify to 7 appearing before 5), I can quickly create a variable to test for this condition. This is why I am using the expand.grid approach.","questionMetadata":{"type":"implementation","tag":"r","level":"beginner"},"answer":"I guess we'll be adjusting it, but how about a `regex` approach? 
\n\nCheck this out:\n\n\n\n```\nlibrary(tidyverse)\n\n# ----------------\nmy_numbers <- c(1, 5, 7)\nmy_combinations <- data.frame(expand.grid(rep(list(my_numbers), 6)))\n\n# Patterns\nlooking <- str_c(\n sep = \"|\",\n \"1{2}\") # At least two \"1\"\n\nnot_looking <- str_c(\n sep = \"|\",\n \"17\", # 1 can only be followed by either 1 or 5\n \"51\", # 5 can only be followed by either 5 or 7\n \"71\", \"75\") # 7 can only be followed by 7\n\n# ----------------\nmy_output <- my_combinations %>% \n rowwise() %>% \n mutate(combo = str_flatten(c_across(starts_with(\"var\")))) %>% \n filter(str_detect(combo, looking), !str_detect(combo, not_looking))\n\n```\n\nThe output:\n\n\n\n```\n> my_output\n# A tibble: 11 \u00d7 7\n# Rowwise: \n Var1 Var2 Var3 Var4 Var5 Var6 combo \n <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr> \n 1 1 1 1 1 1 1 111111\n 2 1 1 1 1 1 5 111115\n 3 1 1 1 1 5 5 111155\n 4 1 1 1 5 5 5 111555\n 5 1 1 5 5 5 5 115555\n 6 1 1 1 1 5 7 111157\n 7 1 1 1 5 5 7 111557\n 8 1 1 5 5 5 7 115557\n 9 1 1 1 5 7 7 111577\n10 1 1 5 5 7 7 115577\n11 1 1 5 7 7 7 115777\n\n```\n\nCreated on 2024-05-01 with [reprex v2.1.0](https:\/\/reprex.tidyverse.org)"}
{"questionId":"f2654741e84b645cd98f539119d4495193a90b6adf59e2e770f12ef105f6d26d","question":"Is it possible to create a perfect pass-through function?\nI want to create a function `f` so that the expression `f(x)` has always exactly the same semantics as the expression `x`.\n\n\nA first naive attempt would be:\n\n\n\n```\ntemplate<typename T>\nauto f(T&& x) -> decltype(auto)\n{\n return x;\n}\n\n```\n\nwhich works fine when `x` is an `lvalue` or an `xvalue`, but when `x` is a `prvalue` it gets transformed into an `xvalue`, which means it will behave differently in some situations.\n\n\nIn particular,\n\n\n\n```\nauto &&z = std::string();\n\n```\n\nworks fine, z is a reference to a life-time extended string object, while\n\n\n\n```\nauto &&z = f(std::string());\n\n```\n\ncreates a dangling reference, because `f(std::string())` is not a prvalue, so it can't be lifetime-extended.\n\n\nOther approaches I've tried either have the same problem or create extra temporaries, which of course is not the \"same semantics\" I'm looking for.\n\n\nIs there any way to create a function that is a no-op and perfect forwards all value categories?","questionMetadata":{"type":"conceptual","tag":"c++","level":"advanced"},"answer":"No, it is not possible, reusing your example\n\n\n\n```\nauto &&z = f(std::string());\n\n```\n\nif return type is\n\n\n- `std::string`, a temporary is created (even if it would just be a move construction).\n- `std::string&&`, you lose life-time extension\n- a custom wrapper (with `operator string()` and so on), it is no longer a `string`."}
{"questionId":"8a7305b9265bcbd974e06ce6a07e59d199255c66114fd32ff14cf33bf2eb5ef2","question":"KMM project build error - 'testClasses' not found in project ':shared'\nI'm trying out KMM for the first time and when I try to rebuild the project, I get this error. Not sure what this task does. Am I the only one getting this error?\n\n\nI'm using a Macbook Pro M1 that runs\n\n\nAndroid Studio Iguana | 2023.2.1\nRuntime version: 17.0.9+0-17.0.9b1087.7-11185874 aarch64\nVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.\n\n\n\n```\nExecuting tasks: [:androidApp:clean, :shared:clean, \n:androidApp:assembleDebug, :androidApp:assembleDebugUnitTest, \n:androidApp:assembleDebugAndroidTest, :shared:assembleDebug, \n:shared:assembleDebugUnitTest, :shared:assembleDebugAndroidTest, \n:shared:assemble, :shared:testClasses] in project \n\/Users\/betteropinions\/Development\/KMM\/TranslateKMMApp\n\nCalculating task graph as no configuration cache is available for \ntasks: :androidApp:clean :shared:clean :androidApp:assembleDebug \n:androidApp:assembleDebugUnitTest \n:androidApp:assembleDebugAndroidTest :shared:assembleDebug \n:shared:assembleDebugUnitTest :shared:assembleDebugAndroidTest \n:shared:assemble :shared:testClasses\nType-safe project accessors is an incubating feature.\n\nFAILURE: Build failed with an exception.\n\n* What went wrong:\nCannot locate tasks that match ':shared:testClasses' as task \n'testClasses' not found in project ':shared'.\n\n* Try: \n> Run gradle tasks to get a list of available tasks.\n> For more on name expansion, please refer to \nhttps:\/\/docs.gradle.org\/8.4\/userguide\/\ncommand_line_interface.html#sec: \nname_abbreviation in the Gradle documentation.\n> Run with --stacktrace option to get the stack trace.\n> Run with --info or --debug option to get more log output.\n> Run with --scan to get full insights.\n> Get more help at https:\/\/help.gradle.org.\n\nBUILD FAILED in 1s\nConfiguration cache entry stored.\n\n```\n\nBelow is the build.gradle.kts for shared module\n\n\n\n```\n\nplugins {\n alias(libs.plugins.kotlinMultiplatform)\n alias(libs.plugins.kotlinCocoapods)\n alias(libs.plugins.androidLibrary)\n alias(libs.plugins.kotlinSerialization)\n id(\"com.squareup.sqldelight\")\n}\n\nkotlin {\n androidTarget {\n compilations.all {\n kotlinOptions {\n jvmTarget = \"1.8\"\n }\n }\n }\n\n iosX64()\n iosArm64()\n iosSimulatorArm64()\n\n cocoapods {\n summary = \"Some description for the Shared Module\"\n homepage = \"Link to the Shared Module homepage\"\n version = \"1.0\"\n ios.deploymentTarget = \"16.0\"\n podfile = project.file(\"..\/iosApp\/Podfile\")\n framework {\n baseName = \"shared\"\n isStatic = true\n }\n }\n \n sourceSets {\n\n commonMain.dependencies {\n \/\/put your multiplatform dependencies here\n implementation(libs.ktor.client.core)\n implementation(libs.ktor.client.content.negotiation)\n implementation(libs.ktor.serialization.kotlinx.json)\n implementation(libs.sqldelight.coroutines.extensions)\n implementation(libs.sqldelight.runtime)\n implementation(libs.kotlinx.datetime)\n }\n\n commonTest.dependencies {\n implementation(kotlin(\"test\"))\n implementation(libs.turbine)\n implementation(libs.assertk)\n }\n\n androidMain.dependencies {\n implementation(libs.ktor.client.android)\n implementation(libs.sqldelight.android.driver)\n }\n\n iosMain.dependencies {\n implementation(libs.ktor.client.darwin)\n implementation(libs.sqldelight.native.driver)\n }\n }\n}\n\nandroid {\n namespace = \"com.example.mytranslate\"\n compileSdk = 34\n defaultConfig {\n minSdk = 
24\n }\n compileOptions {\n sourceCompatibility = JavaVersion.VERSION_1_8\n targetCompatibility = JavaVersion.VERSION_1_8\n }\n}\n\nsqldelight {\n database(\"TranslateDatabase\") {\n packageName = \"com.example.mytranslate.database\"\n sourceFolders = listOf(\"sqldelight\")\n }\n}","questionMetadata":{"type":"debugging","tag":"kotlin","level":"intermediate"},"answer":"This solution <https:\/\/stackoverflow.com\/a\/78017056\/6512100> works for me.\n\n\nI've added `task(\"testClasses\")` within.in `kotlin { }` block in `build.gradle.kts (:shared)` file.\n\n\n\n```\nplugins {\n ...\n}\n\nkotlin {\n androidTarget {\n ...\n }\n\n sourceSets {\n ...\n }\n\n task(\"testClasses\")\n}\n\nandroid {\n ...\n}"}
{"questionId":"7106a6920f9b0b6aa8bcb6cbf26f3895a0122a314f567063fc31805e337c6656","question":"Play Console Warning: Update your Play Core Maven dependency to an Android 14 compatible version\nReceived a warning in the Play Console saying the following\n\n\n\n> \n> Update your Play Core Maven dependency to an Android 14 compatible version! Your current Play Core library is incompatible with targetSdkVersion 34 (Android 14), which introduces a backwards-incompatible change to broadcast receivers to improve user security. As a reminder, from August 31, Google Play requires all new app releases to target Android 14. Update to the latest Play Core library version dependency to avoid app crashes\n> \n> \n> \n\n\nI would like to know, what can I possibly do to solve this problem?\n\n\nMy app is a complete native app, using Java\/Kotlin and XML\/Jetpack Compose","questionMetadata":{"type":"version","tag":"java","level":"intermediate"},"answer":"First and foremost, it's important to see that they've included a [Migration link in the warning](https:\/\/developer.android.com\/guide\/playcore#playcore-migration). So we need to migrate the `Tasks` class as per that\n\n\nLater, we can split the monolithic `play-core` SDK into the required dependencies, as per the app requirement. The alternative SDKs can be found [here](https:\/\/developer.android.com\/reference\/com\/google\/android\/play\/core\/release-notes#partitioned-apis)\n\n\nIn my case, had to get the In-App Review and In-App updates SDK, seperately\n\n\nEDIT:\n\n\nFor hybrid apps, we can go the `android` folder\n\n\n\n```\ncd android\n\n```\n\nAnd run the following command to check all the dependencies listed in the app, and check which one has the `com.google.android.play:core` dependency, by exporting all the dependencies to a text file `dependencies.txt`\n\n\n\n```\n.\/gradlew app:dependencies > dependencies.txt\n\n```\n\nOnce you figure out, which of your dependencies uses the play core library, you can actually update it"}
{"questionId":"bd9b4dfa0800ccecdb4be73320afec9c0beb8c3ed4a12c490a88372ab321b410","question":"Why is JavaScript executing callbacks in a for-loop so fast the first time?\nThe following code with `callback` argument runs faster in the first loop.\n\n\n\n\n\n```\nconst fn = (length, label, callback) => {\n console.time(label);\n for (let i = 0; i < length; i++) {\n callback && callback(i);\n }\n console.timeEnd(label);\n};\n\nconst length = 100000000;\nfn(length, \"1\", () => {}) \/\/ very few intervals\nfn(length, \"2\", () => {}) \/\/ regular\nfn(length, \"3\", () => {}) \/\/ regular\n```\n\n\n\n\n\n\nand then I removed the third argument `callback`, and their execution times are very near:\n\n\n\n\n\n```\nconst fn = (length, label, callback) => {\n console.time(label);\n for (let i = 0; i < length; i++) {\n callback && callback(i);\n }\n console.timeEnd(label);\n};\n\nconst length = 100000000;\nfn(length, \"1\") \/\/ regular\nfn(length, \"2\") \/\/ regular\nfn(length, \"3\") \/\/ regular\n```\n\n\n\n\n\n\nWhy?","questionMetadata":{"type":"conceptual","tag":"javascript","level":"intermediate"},"answer":"In short: it's due to inlining.\n\n\nWhen a call such as `callback()` has seen only one target function being called, and the containing function (\"`fn`\" in this case) is optimized, then the optimizing compiler will (usually) decide to inline that call target. So in the fast version, no actual call is performed, instead the empty function is inlined. \n\nWhen you then call different callbacks, the old optimized code needs to be thrown away (\"deoptimized\"), because it is now incorrect (if the new callback has different behavior), and upon re-optimization a little while later, the inlining heuristic decides that inlining multiple possible targets probably isn't worth the cost (because inlining, while sometimes enabling great performance benefits, also has certain costs), so it doesn't inline anything. Instead, generated optimized code will now perform actual calls, and you'll see the cost of that.\n\n\nAs @0stone0 observed, when you pass *the same* callback on the second call to `fn`, then deoptimization isn't necessary, so the originally generated optimized code (that inlined this callback) can continue to be used. Defining three different callbacks all with the same (empty) source code doesn't count as \"the same callback\".\n\n\nFWIW, this effect is most pronounced in microbenchmarks; though sometimes it's also visible in more real-world-ish code. It's certainly a common trap for microbenchmarks to fall into and produce confusing\/misleading results.\n\n\nIn the second experiment, when there is no `callback`, then of course the `callback &&` part of the expression will already bail out, and none of the three calls to `fn` will call (or inline) any callbacks, because there are no callbacks."}
{"questionId":"4abd1d17cbf6a95b6c1dafacb9d453b777dd5bdd68980e2ef613d61a3d9913b9","question":"how to define in C++20 a concept to check if a type matches any of the types in a type-list\nI want to define a concept in C++ (<= C++20) to check if a type matches any of the types define in a type-list struct.\n\n\nThe following is my attempt so far:\n\n\n\n```\ntemplate<typename... Types>\nstruct TypeList {};\n\nusing SupportedTypes = TypeList<int, bool, float, long>;\n \ntemplate<typename T, typename... Types>\nconcept IsAnyOf = (std::is_same_v<T, Types> || ...);\n \nstatic_assert(IsAnyOf<bool, SupportedTypes>);\n\n```\n\nI have also tried using:\n\n\n\n```\ntemplate<typename T, typename... Types>\nconcept IsAnyOf = std::disjunction_v<std::is_same<T, Types>...>;\n\n```\n\nBut my static assertion fails:\n\n\n\n```\nStatic assertion failed\nbecause 'IsSupportedType<_Bool, SupportedTypes>' evaluated to false\nbecause 'std::is_same_v<_Bool, meta::TypeList<int, _Bool, float, long> >' evaluated to false\n\n```\n\nI understand it probably has to do with the fact that I'm passing `SupportedTypes` to the concept without properly unpacking the types inside it, and hence in the static assertion I'm checking if bool is the same as `SupportedTypes`, as opposed to checking if bool is the same as any of the types inside `SupportedTypes`; but I can't get it to work nonetheless.","questionMetadata":{"type":"version","tag":"c++","level":"advanced"},"answer":"> \n> how to define in C++20 a concept to check if a type matches any of the types in a type-list\n> \n> \n> \n\n\n#### Method 1\n\n\nYou can change the program to as shown below. This is more readable than method 2.\n\n\n\n```\ntemplate<typename... Types>\nstruct TypeList {};\n\ntemplate <typename T, typename List>\nconcept IsAnyOf = []<typename... Types>(TypeList<Types...>) \n{\n return (std::is_same_v<Types, T> || ...);\n}(List());\n\n```\n\n[Working demo](https:\/\/godbolt.org\/z\/rqzPaqWex)\n\n\n\n\n---\n\n\nThis is how you would use it:\n\n\n\n```\nusing SupportedTypes = TypeList<double, float, std::string, bool>;\n\nint main() \n{\n std::cout << IsAnyOf<bool, SupportedTypes>; \/\/true\n std::cout << IsAnyOf<char, SupportedTypes>; \/\/false\n}\n\n```\n\n\n\n---\n\n\n#### Method 2\n\n\nNote there are also other ways to do this like using `std::tuple` and `std::inxex_sequence` as shown in [this alternative](https:\/\/godbolt.org\/z\/5fWqvo3nP).\n\n\n\n```\ntemplate <typename T, typename Typelist>\nconcept is_any_of = []<std::size_t... Indices>(std::index_sequence<Indices...>) \n{\n return (std::is_same_v<std::tuple_element_t<Indices, Typelist>, T> || ...);\n}(std::make_index_sequence<std::tuple_size_v<Typelist>>());"}
{"questionId":"81fbbd2d81f97ef3c8478516820bff2b67df9d0098272722bb8af509f1d34a09","question":"Characters printed differently in R.app\/RStudio\/reprex\nWith R 4.4.0 on a MacBook, nothing locale() or encoding related in .Rprofile or .Renviron.\n`Sys.getlocale()` on a fresh session returns `\"en_US.UTF-8\/en_US.UTF-8\/en_US.UTF-8\/C\/en_US.UTF-8\/en_US.UTF-8\"` in both the native R console, or RStudio.\n\n\n`KOI8-R` is a Cyrillic encoding that uses one byte per character. When using reprex from R studio (this is my output, which conforms to my expectations.\n\n\nNote: this is using the reprex addin, which is running `reprex::reprex()`, itself using as default input code from the paste bin.\n\n\n\n```\nch256 <- sapply(0:255, function(x) rawToChar(as.raw(x)))\nSys.setlocale(\"LC_CTYPE\", \"ru_RU.KOI8-R\")\n#> [1] \"ru_RU.KOI8-R\"\nch256\n#> [1] \"\" \"\\001\" \"\\002\" \"\\003\" \"\\004\" \"\\005\" \"\\006\" \"\\a\" \"\\b\" \"\\t\" \n#> [11] \"\\n\" \"\\v\" \"\\f\" \"\\r\" \"\\016\" \"\\017\" \"\\020\" \"\\021\" \"\\022\" \"\\023\"\n#> [21] \"\\024\" \"\\025\" \"\\026\" \"\\027\" \"\\030\" \"\\031\" \"\\032\" \"\\033\" \"\\034\" \"\\035\"\n#> [31] \"\\036\" \"\\037\" \" \" \"!\" \"\\\"\" \"#\" \"$\" \"%\" \"&\" \"'\" \n#> [41] \"(\" \")\" \"*\" \"+\" \",\" \"-\" \".\" \"\/\" \"0\" \"1\" \n#> [51] \"2\" \"3\" \"4\" \"5\" \"6\" \"7\" \"8\" \"9\" \":\" \";\" \n#> [61] \"<\" \"=\" \">\" \"?\" \"@\" \"A\" \"B\" \"C\" \"D\" \"E\" \n#> [71] \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \n#> [81] \"P\" \"Q\" \"R\" \"S\" \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \n#> [91] \"Z\" \"[\" \"\\\\\" \"]\" \"^\" \"_\" \"`\" \"a\" \"b\" \"c\" \n#> [101] \"d\" \"e\" \"f\" \"g\" \"h\" \"i\" \"j\" \"k\" \"l\" \"m\" \n#> [111] \"n\" \"o\" \"p\" \"q\" \"r\" \"s\" \"t\" \"u\" \"v\" \"w\" \n#> [121] \"x\" \"y\" \"z\" \"{\" \"|\" \"}\" \"~\" \"\\177\" \"\u2500\" \"\u2502\"\n#> [131] \"\u250c\" \"\u2510\" \"\u2514\" \"\u2518\" \"\u251c\" \"\u2524\" \"\u252c\" \"\u2534\" \"\u253c\" \"\u2580\"\n#> [141] \"\u2584\" \"\u2588\" \"\u258c\" \"\u2590\" \"\u2591\" \"\u2592\" \"\u2593\" \"\u2320\" \"\u25a0\" \"\u2219\"\n#> [151] \"\u221a\" \"\u2248\" \"\u2264\" \"\u2265\" \" \" \"\u2321\" \"\u00b0\" \"\u00b2\" \"\u00b7\" \"\u00f7\"\n#> [161] \"\u2550\" \"\u2551\" \"\u2552\" \"\u0451\" \"\u2553\" \"\u2554\" \"\u2555\" \"\u2556\" \"\u2557\" \"\u2558\"\n#> [171] \"\u2559\" \"\u255a\" \"\u255b\" \"\u255c\" \"\u255d\" \"\u255e\" \"\u255f\" \"\u2560\" \"\u2561\" \"\u0401\"\n#> [181] \"\u2562\" \"\u2563\" \"\u2564\" \"\u2565\" \"\u2566\" \"\u2567\" \"\u2568\" \"\u2569\" \"\u256a\" \"\u256b\"\n#> [191] \"\u256c\" \"\u00a9\" \"\u044e\" \"\u0430\" \"\u0431\" \"\u0446\" \"\u0434\" \"\u0435\" \"\u0444\" \"\u0433\"\n#> [201] \"\u0445\" \"\u0438\" \"\u0439\" \"\u043a\" \"\u043b\" \"\u043c\" \"\u043d\" \"\u043e\" \"\u043f\" \"\u044f\"\n#> [211] \"\u0440\" \"\u0441\" \"\u0442\" \"\u0443\" \"\u0436\" \"\u0432\" \"\u044c\" \"\u044b\" \"\u0437\" \"\u0448\"\n#> [221] \"\u044d\" \"\u0449\" \"\u0447\" \"\u044a\" \"\u042e\" \"\u0410\" \"\u0411\" \"\u0426\" \"\u0414\" \"\u0415\"\n#> [231] \"\u0424\" \"\u0413\" \"\u0425\" \"\u0418\" \"\u0419\" \"\u041a\" \"\u041b\" \"\u041c\" \"\u041d\" \"\u041e\"\n#> [241] \"\u041f\" \"\u042f\" \"\u0420\" \"\u0421\" \"\u0422\" \"\u0423\" \"\u0416\" \"\u0412\" \"\u042c\" \"\u042b\"\n#> [251] \"\u0417\" \"\u0428\" \"\u042d\" \"\u0429\" \"\u0427\" \"\u042a\"\n\n```\n\nHowever the same code printed in my RStudio console prints something different (fake reprex from output copy and paste):\n\n\n\n```\nch256 <- 
sapply(0:255, function(x) rawToChar(as.raw(x)))\nSys.setlocale(\"LC_CTYPE\", \"ru_RU.KOI8-R\")\nch256\n#> [1] \"\" \"\\001\" \"\\002\" \"\\003\" \"\\004\" \"\\005\" \"\\006\" \"\\a\" \"\\b\" \"\\t\" \n#> [11] \"\\n\" \"\\v\" \"\\f\" \"\\r\" \"\\016\" \"\\017\" \"\\020\" \"\\021\" \"\\022\" \"\\023\"\n#> [21] \"\\024\" \"\\025\" \"\\026\" \"\\027\" \"\\030\" \"\\031\" \"\\032\" \"\\033\" \"\\034\" \"\\035\"\n#> [31] \"\\036\" \"\\037\" \" \" \"!\" \"\\\"\" \"#\" \"$\" \"%\" \"&\" \"'\" \n#> [41] \"(\" \")\" \"*\" \"+\" \",\" \"-\" \".\" \"\/\" \"0\" \"1\" \n#> [51] \"2\" \"3\" \"4\" \"5\" \"6\" \"7\" \"8\" \"9\" \":\" \";\" \n#> [61] \"<\" \"=\" \">\" \"?\" \"@\" \"A\" \"B\" \"C\" \"D\" \"E\" \n#> [71] \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \n#> [81] \"P\" \"Q\" \"R\" \"S\" \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \n#> [91] \"Z\" \"[\" \"\\\\\" \"]\" \"^\" \"_\" \"`\" \"a\" \"b\" \"c\" \n#> [101] \"d\" \"e\" \"f\" \"g\" \"h\" \"i\" \"j\" \"k\" \"l\" \"m\" \n#> [111] \"n\" \"o\" \"p\" \"q\" \"r\" \"s\" \"t\" \"u\" \"v\" \"w\" \n#> [121] \"x\" \"y\" \"z\" \"{\" \"|\" \"}\" \"~\" \"\\177\" \"\ufffd\" \"\ufffd\"\n#> [131] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [141] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [151] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [161] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [171] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [181] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [191] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [201] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [211] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [221] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [231] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [241] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n#> [251] \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\" \"\ufffd\"\n\n```\n\nIn the R for Mac OS X GUI (R.app) it's different again, the encoding appears to be ignored and latin1 looking characters are printed (fake reprex from output copy and paste):\n\n\n\n```\nch256 <- sapply(0:255, function(x) rawToChar(as.raw(x)))\nSys.setlocale(\"LC_CTYPE\", \"ru_RU.KOI8-R\")\n#> [1] \"ru_RU.KOI8-R\"\nch256\n#> [1] \"\" \"\\001\" \"\\002\" \"\\003\" \"\\004\" \"\\005\" \"\\006\" \"\\a\" \"\\b\" \"\\t\" \n#> [11] \"\\n\" \"\\v\" \"\\f\" \"\\r\" \"\\016\" \"\\017\" \"\\020\" \"\\021\" \"\\022\" \"\\023\"\n#> [21] \"\\024\" \"\\025\" \"\\026\" \"\\027\" \"\\030\" \"\\031\" \"\\032\" \"\\033\" \"\\034\" \"\\035\"\n#> [31] \"\\036\" \"\\037\" \" \" \"!\" \"\\\"\" \"#\" \"$\" \"%\" \"&\" \"'\" \n#> [41] \"(\" \")\" \"*\" \"+\" \",\" \"-\" \".\" \"\/\" \"0\" \"1\" \n#> [51] \"2\" \"3\" \"4\" \"5\" \"6\" \"7\" \"8\" \"9\" \":\" \";\" \n#> [61] \"<\" \"=\" \">\" \"?\" 
\"@\" \"A\" \"B\" \"C\" \"D\" \"E\" \n#> [71] \"F\" \"G\" \"H\" \"I\" \"J\" \"K\" \"L\" \"M\" \"N\" \"O\" \n#> [81] \"P\" \"Q\" \"R\" \"S\" \"T\" \"U\" \"V\" \"W\" \"X\" \"Y\" \n#> [91] \"Z\" \"[\" \"\\\\\" \"]\" \"^\" \"_\" \"`\" \"a\" \"b\" \"c\" \n#> [101] \"d\" \"e\" \"f\" \"g\" \"h\" \"i\" \"j\" \"k\" \"l\" \"m\" \n#> [111] \"n\" \"o\" \"p\" \"q\" \"r\" \"s\" \"t\" \"u\" \"v\" \"w\" \n#> [121] \"x\" \"y\" \"z\" \"{\" \"|\" \"}\" \"~\" \"\\177\" \"\u00c4\" \"\u00c5\"\n#> [131] \"\u00c7\" \"\u00c9\" \"\u00d1\" \"\u00d6\" \"\u00dc\" \"\u00e1\" \"\u00e0\" \"\u00e2\" \"\u00e4\" \"\u00e3\"\n#> [141] \"\u00e5\" \"\u00e7\" \"\u00e9\" \"\u00e8\" \"\u00ea\" \"\u00eb\" \"\u00ed\" \"\u00ec\" \"\u00ee\" \"\u00ef\"\n#> [151] \"\u00f1\" \"\u00f3\" \"\u00f2\" \"\u00f4\" \"\u00f6\" \"\u00f5\" \"\u00fa\" \"\u00f9\" \"\u00fb\" \"\u00fc\"\n#> [161] \"\u2020\" \"\u00b0\" \"\u00a2\" \"\u00a3\" \"\u00a7\" \"\u2022\" \"\u00b6\" \"\u00df\" \"\u00ae\" \"\ufffd\"\n#> [171] \"\u2122\" \"\u00b4\" \"\u00a8\" \"\u2260\" \"\u00c6\" \"\u00d8\" \"\u221e\" \"\u00b1\" \"\u2264\" \"\u2265\"\n#> [181] \"\u00a5\" \"\u00b5\" \"\u2202\" \"\u2211\" \"\u220f\" \"\u03c0\" \"\u222b\" \"\u00aa\" \"\u00ba\" \"\u03a9\"\n#> [191] \"\u00e6\" \"\u00f8\" \"\u00bf\" \"\u00a1\" \"\u00ac\" \"\u221a\" \"\u0192\" \"\u2248\" \"\u2206\" \"\u00ab\"\n#> [201] \"\u00bb\" \"\u2026\" \"\u00a0\" \"\u00c0\" \"\u00c3\" \"\u00d5\" \"\u0152\" \"\u0153\" \"\u2013\" \"\u2014\"\n#> [211] \"\u201c\" \"\u201d\" \"\u2018\" \"\u2019\" \"\u00f7\" \"\u25ca\" \"\u00ff\" \"\u0178\" \"\u2044\" \"\u20ac\"\n#> [221] \"\u2039\" \"\u203a\" \"\ufb01\" \"\ufb02\" \"\u2021\" \"\u00b7\" \"\u201a\" \"\u201e\" \"\u2030\" \"\u00c2\"\n#> [231] \"\u00ca\" \"\u00c1\" \"\u00cb\" \"\u00c8\" \"\u00cd\" \"\u00ce\" \"\u00cf\" \"\u00cc\" \"\u00d3\" \"\u00d4\"\n#> [241] \"\uf8ff\" \"\u00d2\" \"\u00da\" \"\u00db\" \"\u00d9\" \"\u0131\" \"\u02c6\" \"\u02dc\" \"\u00af\" \"\u02d8\"\n#> [251] \"\u02d9\" \"\u02da\" \"\u00b8\" \"\u02dd\" \"\u02db\" \"\u02c7\"\n\n```\n\nIn fact I can reproduce the above with the ISO8859-1 encoding as well (latin1), the native R console will print those correctly this time like reprex, but the RStudio output will still be wrong.\n\n\nI know that making everything UTF-8 fixes everything, but I really want to understand :\n\n\n- What's happening here?\n- Is it possible to get the correct output everywhere?\n- Is this output different on different systems?","questionMetadata":{"type":"version","tag":"r","level":"intermediate"},"answer":"I'm not a macOS or locale expert by any means, but this issue seems to boil down to the documented limitations of `Sys.setlocale` (a simple wrapper around `setlocale` from the Standard C Library; see `man setlocale`). `help(\"Sys.setlocale\")` says:\n\n\n\n> \n> Attempts to change the character set (by `Sys.setlocale(\"LC_CTYPE\", )`, if that implies a different character set) during a session may not work and are likely to lead to some confusion.\n> \n> \n> \n\n\nIIUC that is because the application embedding R, which handles the output stream, may not be written to honor changes to the character set by the embedded R. So you really need to be reading the documentation of the application embedding R.\n\n\nThe [R for macOS FAQ](https:\/\/cran.r-project.org\/bin\/macosx\/RMacOSX-FAQ.html#Internationalization-of-the-R_002eapp) says:\n\n\n\n> \n> By default **R.APP** uses UTF-8 for newly created documents and for the console. 
When opening new documents **R.APP** assumes UTF-8 and only if the document violates UTF-8 rules, it will try to fallback to legacy encoding, usually Mac Roman.\n> \n> \n> \n\n\nIndeed, your output from **R.APP** seems consistent with [Mac OS Roman](https:\/\/en.wikipedia.org\/wiki\/Mac_OS_Roman).\n\n\n[This](https:\/\/support.posit.co\/hc\/en-us\/articles\/200532197-Character-Encoding-in-the-RStudio-IDE) Posit Support article says:\n\n\n\n> \n> If you call `Sys.setlocale` with `\"LC_CTYPE\"` or `\"LC_ALL\"` to change the system locale while RStudio is running, you may run into some minor issues as RStudio assumes the system encoding doesn't change.\n> \n> \n> \n\n\nsuggesting that the character set used by the RStudio console is fixed at start up depending on the environment at start up. Well, if we dig around [in the RStudio sources](https:\/\/github.com\/rstudio\/rstudio\/blob\/5ecd209673e476549955cc5c8b2f636269c61712\/src\/cpp\/desktop\/DesktopUtilsMac.mm#L119-L170), we find that it effectively requires UTF-8 even if the environment indicates a different, macOS-supported character set. (And, on my macOS, `locale -a` indicates that KOI8-R *is* supported.)\n\n\nThat leaves **Terminal.app**, which I tend to use instead of **R.app** because I tend to want a shell. The encoding there can be set under `Settings > Profiles > Advanced > International`. If that is set to UTF-8, then we see output similar to RStudio. But if that is set to KOI8-R, then we see \"expected\" output for bytes 0 through 255. Nice.\n\n\nTo answer some of the remaining questions:\n\n\n### How do you get \"expected\" output under every application?\n\n\nIf you know that the source encoding is KOI8-R and that the system encoding is UTF-8, then use `iconv` to translate the strings to the system encoding instead of trying to change the character set to match the source encoding.\n\n\n\n```\niconv(ch256, from = \"KOI8-R\", to = \"UTF-8\")\n\n```\n\nIf you don't know that the system encoding is UTF-8, then you could try using `to = l10n_info()[[\"codeset\"]]`. I'm not sure if that is general or portable, though ...\n\n\n### Why are bytes 128 through 255 rendered as `\"\ufffd\"`?\n\n\nUnder section \"Single-byte locales\", `help(\"print.default\")` says:\n\n\n\n> \n> If a non-printable character is encountered during output, it is represented as one of the ANSI escape sequences (`\\a`, `\\b`, `\\f`, `\\n`, `\\r`, `\\t`, `\\v`, `\\\\` and `\\0`: see Quotes), or failing that as a 3-digit octal code: for example the UK currency pound sign in the C locale (if implemented correctly) is printed as `\\243`. Which characters are non-printable depends on the locale.\n> \n> \n> \n\n\nUnder section \"Unicode and other multi-byte locales\", it says:\n\n\n\n> \n> It is possible to have a character string in a character vector that is not valid in the current locale. If a byte is encountered that is not part of a valid character it is printed in hex in the form `\\xab` and this is repeated until the start of a valid character. (This will rapidly recover from minor errors in UTF-8.)\n> \n> \n> \n\n\nYou told R to use a single-byte encoding, namely KOI8-R. In that encoding, bytes 128 through 255 are printable characters, so `print.default` does not attempt to format them as octal `\"\\abc\"`. It leaves the original, single bytes alone. 
But those bytes do not represent valid characters in the UTF-8 encoding used by the application embedding R, so they are ultimately rendered as the standard, multi-byte [replacement character](https:\/\/en.wikipedia.org\/wiki\/Specials_(Unicode_block)#Replacement_character) `\"\ufffd\"`. You do *not* see the hex `\"\\xab\"` because (again) R thinks that you are using a single-byte encoding. It has no way of knowing that the application embedding R is actually using a multi-byte encoding, where `\"\\xab\"` would be more informative than `\"\ufffd\"`.\n\n\n### Why does `reprex` produce \"expected\" output?\n\n\nI don't really know. **reprex** uses **rmarkdown** to render output and **rmarkdown** seems to use UTF-8 unconditionally. My guess is that somewhere in the `reprex` call stack the output containing bytes 128 through 255 is translated from KOI8-R to UTF-8. But how would **rmarkdown** know to translate from KOI8-R? Does it somehow record the encoding in use before the R subprocess terminates? The messages emitted by this augmented code block are suggestive ...\n\n\n\n```\nreprex::reprex({\n Sys.setlocale(\"LC_CTYPE\", \"ru_RU.KOI8-R\")\n sapply(0:255, function(x) rawToChar(as.raw(x)))\n Sys.setlocale(\"LC_CTYPE\", \"ru_RU.UTF-8\")\n sapply(0:255, function(x) rawToChar(as.raw(x)))\n },\n std_out_err = TRUE)\n\n```\n\n\n```\nQuitting from lines at lines 20-24 [unnamed-chunk-2] (soot-cub_reprex.spin.Rmd)\nError in gsub(\"[\\n]{2,}$\", \"\\n\", x) : input string 1 is invalid\nIn addition: Warning messages:\n1: In grepl(\"^\\\\s*$\", x) :\n unable to translate ' [1] \"\" \"\\001\" \"\\002\" \"\\003\" \"\\004\" \"\\005\" \"\\006\" \"\\a\" \"\\b\" \"\\t\" \n [11] \"\\n\" \"\\v\" \"\\f\" \"\\r\" \"\\016\" \"\\017\" \"\\020\" \"\\021\" \"\\022\" \"\\023\"\n [21] \"\\024\" \"\\025\" \"\\026\" \"\\027\" \"\\030\" \"\\031\" \"\\032\" \"\\033\" \"\\034\" \"\\035\"\n [31] \"\\036\" \"\\037\" \" \" ...' to a wide string\n2: In grepl(\"^\\\\s*$\", x) : input string 1 is invalid\n3: In gsub(\"[\\n]{2,}$\", \"\\n\", x) :\n unable to translate ' [1] \"\" \"\\001\" \"\\002\" \"\\003\" \"\\004\" \"\\005\" \"\\006\" \"\\a\" \"\\b\" \"\\t\" \n [11] \"\\n\" \"\\v\" \"\\f\" \"\\r\" \"\\016\" \"\\017\" \"\\020\" \"\\021\" \"\\022\" \"\\023\"\n [21] \"\\024\" \"\\025\" \"\\026\" \"\\027\" \"\\030\" \"\\031\" \"\\032\" \"\\033\" \"\\034\" \"\\035\"\n [31] \"\\036\" \"\\037\" \" \" ...' to a wide string\n\n```\n\nMaybe one of the functions in the stack should be passing `useBytes = TRUE` to `grep` and friends. Or maybe not. It would be nice to see the traceback ..."}
{"questionId":"15efde83a6e73999e0174ae61de356188a0522d17c0597231396b615e6a9f506","question":"Angular Signals: What's the proper way to trigger a fetch when input Signals change value?\nSo I've been learning and using **Signals** in Angular, and it's exciting. However, there are some use cases where I feel there's some friction. I can't figure out a good pattern when you have a component with `input signals`, and you want to trigger a re-fetch of data whenever some input value changes.\n\n\n`computed` is obviously not the way to go since they can't be async. And `effect`, according to the docs, shouldn't modify component state. So that seems like a no-go as well. And ngOnChanges is being deprecated (long term) in favor of Signals-based components and zoneless.\n\n\nConsider the following component:\n\n\n\n```\n@Component()\nexport class ChartComponent {\n dataSeriesId = input.required<string>();\n fromDate = input.required<Date>();\n toDate = input.required<Date>();\n\n private data = signal<ChartData | null>(null);\n}\n\n```\n\nWhenever one of the input signals gets a new value, I want to trigger a re-fetch of data, and `update` the value of the private `data` signal.\n\n\nHow would one go about this? What's the best practice? Effect and bypass the rule to modify state?","questionMetadata":{"type":"implementation","tag":"typescript","level":"intermediate"},"answer":"Use Angular's `rxjs-interop` to convert the input signals to an `Observable`, then `switchMap` to fetch the results and then convert the result back to a signal, like this:\n\n\n\n```\nimport { toObservable, toSignal } from '@angular\/core\/rxjs-interop';\n\n@Component({\n selector: 'app-chart',\n standalone: true,\n template: `\n {{data()}}\n `,\n})\nexport class ChartComponent {\n dataSeriesId = input.required<string>();\n fromDate = input.required<Date>();\n toDate = input.required<Date>();\n\n params = computed(() => ({\n id: this.dataSeriesId(),\n from: this.fromDate(),\n to: this.toDate(),\n }));\n\n data = toSignal(\n toObservable(this.params).pipe(\n switchMap((params) => this.fetchData(params))\n )\n );\n\n private fetchData({ id, from, to }: { id: string; from: Date; to: Date }) {\n return of(`Example data for id [${id}] from [${from}] to [${to}]`).pipe(\n delay(1000)\n );\n }\n}\n\n```\n\nAny change to any of the inputs will trigger a new fetch.\n\n\nWorking example on [StackBlitz](https:\/\/stackblitz.com\/edit\/stackblitz-starters-epzd98?file=src%2Fmain.ts)"}
{"questionId":"02bc28efe124abd6a12ae9c5cc13ac503b6dc0f4ac1d90a990faeaf77eab67e7","question":"Jest tells me to use act, but then IDE indicates it is deprecated. what is best?\nConfused about whether to use `act` or something else?\n\n\nJest tells me to wrap in `act`:\n\n\n\n> \n> \"console.error, Warning: An update to ContactFormController inside a test was not wrapped in act(...). When testing, code that causes React state updates should be wrapped into act(...)\"\n> \n> \n> \n\n\nIDE tells me `act` is deprecated:\n\n\n\n> \n> \"The signature '(callback: () => void | Promise): Promise' of 'act' is deprecated.ts(6387)\n> index.d.ts(393, 4): The declaration was marked as deprecated here.\"\n> \n> \n> \n\n\nChatGPT tells me that React batches calls so you dont need to wrap in act:\n\n\n\n> \n> \"In React 17 and above, you typically don't need to manually wrap your updates in act calls in your tests. Instead, React will automatically batch updates within test environments like Jest.\"\n> \n> \n> \n\n\nWhen I remove `act` the test fails.\n\n\nWhat should I use as an alternative to `act`?\n\n\nHere are the packages:\n\n\n\n```\n\"react\": \"^18.2.0\", \n\"@testing-library\/jest-dom\": \"^5.16.5\", \n\"@testing-library\/react\": \"^13.4.0\", \n\"@testing-library\/user-event\": \"^13.5.0\", \n\n```\n\nHere is my test below, this works but I get the `act` is deprecated warning, if I remove `act` the test will fail because it did not wait for the component to re-render:\n\n\n\n```\nimport '@testing-library\/jest-dom'\nimport { act, render, screen } from '@testing-library\/react'\nimport userEvent from '@testing-library\/user-event'\n\nimport ContactFormController from '.'\n\ndescribe('Contact Form Controller tests', () => {\n it('should render loader and disable submit button on form submit', async () => {\n const component = render(<ContactFormController \/>)\n\n const submitBtn = component.getByTestId('contact-submit-button')\n\n await act(async () => {\n userEvent.type(component.getByLabelText(\/^First Name\/i), 'Captain')\n userEvent.type(component.getByLabelText(\/^Last Name\/i), 'Bob')\n userEvent.type(component.getByLabelText(\/^Email\/i), '[email protected]')\n userEvent.type(component.getByPlaceholderText(\/^Greetings\/i), 'Captain Ahoy')\n userEvent.click(submitBtn)\n })\n\n expect(component.getByRole('alert')).toBeInTheDocument()\n expect(submitBtn).toBeDisabled()\n })\n})","questionMetadata":{"type":"version","tag":"javascript","level":"intermediate"},"answer":"I'm guessing you're on react >= 18.3.0 and @testing-library\/react <= 15.0.5.\n\n\nYou can try updating to `@testing-library\/[email protected]` which [includes a fix](https:\/\/github.com\/testing-library\/react-testing-library\/releases\/tag\/v15.0.6) to use `act` from either `react-dom\/test-utils` or `react`.\n\n\nAdditionally, you can upgrade to `[email protected]` which includes an [export of `act`](https:\/\/github.com\/facebook\/react\/releases\/tag\/v18.3.1) and install RTL's optional peer dep of `@types\/[email protected]`.\n\n\nNow you should be able to use `act` in your tests without the deprecation warning."}
{"questionId":"cad6cff7791d62a7209a97dc35fa12b21beaef190245d50e9f34db09ec0c4858","question":"Different between Curly braces{} and brackets[] while initializing array in C#?\nI am really curious to know what is the different between these lines?\n\n\n\n```\n\/\/with curly braces\nint[] array2 = { 1, 2, 3, 4, };\n\/\/with brackets\nint[] array3 = [1, 2, 3, 4,];\n\nConsole.WriteLine(array2[1]);\nConsole.WriteLine(array3[1]);\n\n\/\/the output is the same.\n\n```\n\nI want to know what is the different between using curly braces and brackets while initializing values.","questionMetadata":{"type":"conceptual","tag":"c#","level":"intermediate"},"answer":"In the example you've given, they mean the same thing. But [collection expressions](https:\/\/learn.microsoft.com\/en-us\/dotnet\/csharp\/language-reference\/proposals\/csharp-12.0\/collection-expressions) are in general more flexible. In particular:\n\n\n- They can create instances of collections other than arrays\n- They can use the *spread operator* to include sequences\n\n\nFor example:\n\n\n\n```\nImmutableList<int> x = [0, .. Enumerable.Range(100, 5), 200];\n\n```\n\nThat creates an immutable list of integers with values 0, 100, 101, 102, 103, 104, 200.\n\n\nNote that while [*collection initializers*](https:\/\/learn.microsoft.com\/en-us\/dotnet\/csharp\/programming-guide\/classes-and-structs\/object-and-collection-initializers#collection-initializers) can also be used to initialize non-array collection types in a *somewhat* flexible way, they're more limited than collection expressions *and* still require the `new` part. So for example:\n\n\n\n```\n\/\/ Valid\nList<int> x = new() { 1, 2, 3 };\n\n\/\/ Not valid\nList<int> x = { 1, 2, 3 };\n\n\/\/ Not valid (collection initializers assume mutability)\nImmutableList<int> x = new() { 1, 2, 3 };\n\n```\n\nCollection expressions address both of these concerns."}
{"questionId":"8102c886e2e99b8e62f2de69f578f8b3436671d49ee7d5697008635df7188f52","question":"Is there a way to sort a single member variable in a collection of structs using the C++ standard library?\nLet's say I have a vector of a very simple struct:\n\n\n\n```\nstruct SimpleStruct { int a; int b; int c; };\nstd::vector<SimpleStruct> vs;\n\n```\n\nI wish to sort this struct by 'a' leaving the positions of 'b' and 'c' unchanged. Essentially pivoting on `a`, sorting by `a`, and then unpivoting. As an example:\n\n\n\n```\nbefore: {1, 10, 11}, { 5, 100, 111}, {3, 1000, 1111}\nafter: {1, 10, 11}, {3, 100, 111}, {5, 1000, 1111} \/\/'a' is now sorted, 'b' and 'c' relative position unchanged\n\n```\n\nIf I only cared about correctness and wanted to minimize the amount of potential errors, using the standard library, the obvious solution is to create a second collection of type {value, index}, sort by value, and then overwrite the value at the corresponding index.\n\n\nThis is incredibly inefficient, since conceptually all we really need is a standard sorting operation with a custom comparison and a custom swap.\n\n\nIs there a way to do this in C++ using the standard library without creating a custom sort method?\n\n\nC++20 preferred, preferably without the use of Ranges.","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"This can be done easily with the help of C++20 ranges\n\n\n\n```\nstd::vector<SimpleStruct> vs = {{1, 10, 11}, {5, 100, 111}, {3, 1000, 1111}};\nstd::ranges::sort(vs | std::views::transform(&SimpleStruct::a));\n\/\/ vs now becoms {1, 10, 11}, {3, 100, 111}, {5, 1000, 1111}\n\n```\n\n[Demo](https:\/\/godbolt.org\/z\/ofvMhrxo9)\n\n\nNote that `ranges::sort(vs, {}, &SimpleStruct::a)` is incorrect as the projection is only applied to comparison, so it will still sort the entire `SimpleStruct` object rather than `SimpleStruct::a`."}
{"questionId":"09b9591fcfe35dc851241a2f8fbe72a5fea06bdb4daa5c1408a4cb5ff290c175","question":"How do I register a middleware in Laravel 11?\nLaravel 11 does not come with a middleware file and the kernel.php file has been removed altogther. So, when I create a custom middleware, how do I register it?\n\n\nI do not know where to register middleware. Laravel 11 has been very confusing.","questionMetadata":{"type":"version","tag":"php","level":"intermediate"},"answer":"In laravel 11 you can not register in the kernel.php the middlewares anymore. However there is plenty of other way how you can register the middlewares.\n\n\nIf you want it to be triggered in every call you can append it to the `bootstrap\/app.php` .\n\n\n\n```\n->withMiddleware(function (Middleware $middleware) {\n $middleware->append(YourMiddleware::class);\n})\n\n```\n\nOtherwise you can add middlewares to specific route or route groups:\n\n\n\n```\nRoute::get('\/yourroute', function () {\n \n})->middleware(YourMiddleware::class);\n\n```\n\nIf you add a middleware to a route group but you dont want certain routes to trigger the middleware, you can use the `withoutMiddleware` method\n\n\n\n```\nRoute::middleware([YourMiddleware::class])->group(function () {\n Route::get('\/', function () {\n \/\/ ...\n });\n \n Route::get('\/yourroute', function () {\n \/\/ ...\n })->withoutMiddleware([YourMiddleware::class]);\n});\n\n```\n\nFor more information check out the official documentation: <https:\/\/laravel.com\/docs\/11.x\/middleware#registering-middleware>"}
{"questionId":"563417ff8c9231eae554d84babefbd0c911a0471f703386992af9b9de06a3d9a","question":"How do I isort using ruff?\nI often work in very small projects which do not have config file. How do I use `ruff` in place of `isort` to sort the imports? I know that the following command is roughly equivalent to `black`:\n\n\n\n```\nruff format .\n\n```\n\nThe format command do not sort the imports. How do I do that?","questionMetadata":{"type":"implementation","tag":"python","level":"beginner"},"answer":"According to the [documentation](https:\/\/docs.astral.sh\/ruff\/formatter\/#sorting-imports):\n\n\n\n> \n> Currently, the Ruff formatter does not sort imports. In order to both sort imports and format, call the Ruff linter and then the formatter:\n> \n> \n> \n\n\n\n```\nruff check --select I --fix .\nruff format ."}
{"questionId":"3ca59f96793a6dd039e0d053ec047a1b9b8a760e15d091b054c72681f43018f4","question":"How to get efficient floating point maximum in Rust\nI was testing how to get the maximum for an array of floating points:\n\n\n\n```\npub fn max(n: [f64;8]) -> f64 {\n IntoIterator::into_iter(n).reduce(|a,b| a.max(b)).unwrap()\n}\n\n```\n\nwhich gives me (nightly Rust)\n\n\n\n```\n vmovsd xmm0, qword ptr [rdi + 56]\n vmovsd xmm1, qword ptr [rdi + 48]\n vmovsd xmm2, qword ptr [rdi + 40]\n vmovsd xmm3, qword ptr [rdi + 32]\n vmovsd xmm4, qword ptr [rdi + 24]\n vmovsd xmm5, qword ptr [rdi + 16]\n vmovsd xmm6, qword ptr [rdi]\n vmovsd xmm7, qword ptr [rdi + 8]\n vcmpunordsd xmm8, xmm6, xmm6\n vmaxsd xmm6, xmm7, xmm6\n vblendvpd xmm6, xmm6, xmm7, xmm8\n vcmpunordsd xmm7, xmm6, xmm6\n vmaxsd xmm6, xmm5, xmm6\n vblendvpd xmm5, xmm6, xmm5, xmm7\n vcmpunordsd xmm6, xmm5, xmm5\n vmaxsd xmm5, xmm4, xmm5\n vblendvpd xmm4, xmm5, xmm4, xmm6\n vcmpunordsd xmm5, xmm4, xmm4\n vmaxsd xmm4, xmm3, xmm4\n vblendvpd xmm3, xmm4, xmm3, xmm5\n vcmpunordsd xmm4, xmm3, xmm3\n vmaxsd xmm3, xmm2, xmm3\n vblendvpd xmm2, xmm3, xmm2, xmm4\n vcmpunordsd xmm3, xmm2, xmm2\n vmaxsd xmm2, xmm1, xmm2\n vblendvpd xmm1, xmm2, xmm1, xmm3\n vcmpunordsd xmm2, xmm1, xmm1\n vmaxsd xmm1, xmm0, xmm1\n vblendvpd xmm0, xmm1, xmm0, xmm2\n ret\n\n```\n\nSo I spend a lot of time with NaN handling. I am pretty sure that `vmaxsd` does the same as `f64::max` in Rust, but I am not sure if I overlook something.\n\n\nSo I turned to C++ and got\n\n\n\n```\ndouble max(double *num) {\n double sum = num[0];\n for (int i = 1; i < 8; i++) {\n sum = std::max(sum, num[i]);\n }\n return sum;\n}\n\n```\n\nwhich compiles to (on gcc 14.1)\n\n\n\n```\n vmovsd xmm2, QWORD PTR [rdi]\n vmovsd xmm1, QWORD PTR [rdi+8]\n vmaxsd xmm0, xmm1, xmm2\n vmovsd xmm1, QWORD PTR [rdi+16]\n vmovsd xmm2, QWORD PTR [rdi+24]\n vmaxsd xmm1, xmm1, xmm0\n vmaxsd xmm0, xmm2, xmm1\n vmovsd xmm2, QWORD PTR [rdi+32]\n vmaxsd xmm1, xmm2, xmm0\n vmovsd xmm2, QWORD PTR [rdi+40]\n vmaxsd xmm0, xmm2, xmm1\n vmovsd xmm2, QWORD PTR [rdi+48]\n vmaxsd xmm1, xmm2, xmm0\n vmovsd xmm0, QWORD PTR [rdi+56]\n vmaxsd xmm0, xmm0, xmm1\n ret\n\n```\n\n(no fast-math option, just -O3)\n\n\nwhich leads me to believe the assembly from Rust is sub-optimal or the semantics of C++ max and Rust max are different.\n\n\nCan someone shed some light on this issue?\nAnd how could I emit the same code as C++ with Rust here?","questionMetadata":{"type":"optimization","tag":"rust","level":"intermediate"},"answer":"The documentation of [`f64::max`](https:\/\/doc.rust-lang.org\/std\/primitive.f64.html#method.max) tells us:\n\n\n\n> \n> If one of the arguments is NaN, then the other argument is returned\n> \n> \n> \n\n\nSo it only produces `NaN` when *both* arguments are `NaN`.\n\n\nBut [`std::max`](https:\/\/en.cppreference.com\/w\/cpp\/algorithm\/max) uses `<` for comparision which can produce `NaN` if only one of the operands is `NaN`. 
Similarly [`MAXSD`](https:\/\/www.felixcloutier.com\/x86\/maxsd) always returns the second operand when either is `NaN` and thus also can return `NaN` with only one (the second) operand being `NaN`:\n\n\n\n```\nMAX(SRC1, SRC2)\n{\n IF ((SRC1 = 0.0) and (SRC2 = 0.0)) THEN DEST := SRC2;\n ELSE IF (SRC1 = NaN) THEN DEST := SRC2; FI;\n ELSE IF (SRC2 = NaN) THEN DEST := SRC2; FI;\n ELSE IF (SRC1 > SRC2) THEN DEST := SRC1;\n ELSE DEST := SRC2;\n FI;\n}\n\n```\n\nSo while [`MAXSD`](https:\/\/www.felixcloutier.com\/x86\/maxsd) and C++s [`std::max`](https:\/\/en.cppreference.com\/w\/cpp\/algorithm\/max) have compatible semantics, Rusts [`f64::max`](https:\/\/doc.rust-lang.org\/std\/primitive.f64.html#method.max) is not compatible:\n\n\n\n```\nstd::cout << std::max(nan, 1.0) << \" \" << std::max(1.0, nan); \/\/ \u2192 nan 1\n\n```\n\n\n```\nprintln!(\"{} {}\", f64::max(nan, 1.0), f64::max(1.0, nan)); \/\/ \u2192 1 1\n\n```\n\nUsing the same semantics in Rust [produces equivalent assembly](https:\/\/rust.godbolt.org\/#g:!((g:!((g:!((h:codeEditor,i:(filename:%271%27,fontScale:14,fontUsePx:%270%27,j:1,lang:rust,selection:(endColumn:2,endLineNumber:3,positionColumn:2,positionLineNumber:3,selectionStartColumn:2,selectionStartLineNumber:3,startColumn:2,startLineNumber:3),source:%27pub+fn+max(n:+%5Bf64%3B8%5D)+-%3E+f64+%7B%0A++++n.into_iter().reduce(%7Ca,b%7C+if+a+%3C+b+%7B+b+%7D+else+%7B+a%7D).unwrap()%0A%7D%27),l:%275%27,n:%270%27,o:%27Rust+source+%231%27,t:%270%27)),k:55.201254573967596,l:%274%27,m:100,n:%270%27,o:%27%27,s:0,t:%270%27),(g:!((h:compiler,i:(compiler:nightly,filters:(b:%270%27,binary:%271%27,binaryObject:%271%27,commentOnly:%270%27,debugCalls:%271%27,demangle:%270%27,directives:%270%27,execute:%271%27,intel:%270%27,libraryCode:%270%27,trim:%271%27,verboseDemangling:%270%27),flagsViewOpen:%271%27,fontScale:14,fontUsePx:%270%27,j:1,lang:rust,libs:!(),options:%27-C+opt-level%3D3%27,overrides:!((name:edition,value:%272021%27)),selection:(endColumn:1,endLineNumber:1,positionColumn:1,positionLineNumber:1,selectionStartColumn:1,selectionStartLineNumber:1,startColumn:1,startLineNumber:1),source:1),l:%275%27,n:%270%27,o:%27+rustc+nightly+(Editor+%231)%27,t:%270%27),(h:output,i:(compilerName:%27rustc+nightly%27,editorid:1,fontScale:14,fontUsePx:%270%27,j:1,wrap:%271%27),l:%275%27,n:%270%27,o:%27Output+of+rustc+nightly+(Compiler+%231)%27,t:%270%27)),k:44.798745426032404,l:%274%27,n:%270%27,o:%27%27,s:0,t:%270%27)),l:%272%27,n:%270%27,o:%27%27,t:%270%27)),version:4):\n\n\n\n```\npub fn max(n: [f64;8]) -> f64 {\n n.into_iter().reduce(|a,b| if a < b { b } else { a }).unwrap()\n}\n\n```\n\n\n```\nexample::max::h17b765fea01ee3b1:\n movsd xmm0, qword ptr [rdi + 56]\n movsd xmm1, qword ptr [rdi + 48]\n movsd xmm2, qword ptr [rdi + 40]\n movsd xmm3, qword ptr [rdi + 32]\n movsd xmm4, qword ptr [rdi + 24]\n movsd xmm5, qword ptr [rdi + 8]\n maxsd xmm5, qword ptr [rdi]\n movsd xmm6, qword ptr [rdi + 16]\n maxsd xmm6, xmm5\n maxsd xmm4, xmm6\n maxsd xmm3, xmm4\n maxsd xmm2, xmm3\n maxsd xmm1, xmm2\n maxsd xmm0, xmm1\n ret"}
{"questionId":"6bc504cb9d26335373b247783119cea66aa741744583beac9779a517b3a95501","question":"Issue with none-ls configuration error with eslint\\_d\nI'm configuring neovim with none-ls and when I'm trying to add eslint\\_d to the setup I have this error :\n\n\n**[null-ls] failed to load builtin eslint\\_d for method diagnostics; please check your config**\n\n\nHere is what my none-ls.lua file looks like\n\n\n\n```\n return {\n \"nvimtools\/none-ls.nvim\",\n config = function()\n local null_ls = require(\"null-ls\")\n\n null_ls.setup({\n sources = {\n null_ls.builtins.formatting.stylua,\n null_ls.builtins.formatting.prettier,\n null_ls.builtins.diagnostics.eslint_d,\n },\n })\n \n vim.keymap.set(\"n\", \"<leader>gf\", vim.lsp.buf.format, {})\n end,\n }\n\n\n```\n\nI only have an issue with eslint\\_d (I tried eslint-lsp too, same issue)\n\n\nI have installed eslint\\_d with Mason (even tried to uninstall and install it again)\nI have installed eslint\\_d globally using npm\nI have checked none-ls' documentation and it looks like it should work\n\n\nDoes anyone know what could be the issue?\nThanks a lot!","questionMetadata":{"type":"debugging","tag":"lua","level":"intermediate"},"answer":"This is probably a consequence of the changes announced here: <https:\/\/github.com\/nvimtools\/none-ls.nvim\/discussions\/81>.\n\n\nYou can get the code actions\/diagnostics for `eslint_d` from [none-ls-extras](https:\/\/github.com\/nvimtools\/none-ls-extras.nvim\/tree\/main) (don't forget to add this dependency to your `null-ls` installation).\n\n\nYour config should be updated to something similar to this:\n\n\n\n```\nlocal null_ls = require(\"null-ls\")\n\nnull_ls.setup {\n sources = {\n require(\"none-ls.diagnostics.eslint_d\"), \n ...\n }\n}"}
{"questionId":"2db655aa026a592ff80278707364ba0241c5d107bdc8b392a61d196f3bbc72ea","question":"docker-compose run issue 2024: Error: 'ContainerConfig'\nI have what seems like a very strange issue that I hope someone has hit before.\n\n\nI have a docker-compose file that houses a service for redis. Nothing special, I just grab the latest redis from docker hub. I went in to redeploy today and I normally run `--force-recreate` to down\/up the containers, but when I attempt to run `--force-recreate` today, I am getting weird errors I have not seen before (and this worked fine yesterday).\n\n\nStrangely enough though, running normal down\/up commands works and there is no issue. Am I missing something?\n\n\nHere are the commands that work to down\/up my system without errors:\n\n\n\n```\ndocker-compose -f docker-compose.prod.yml down \ndocker-compose -f docker-compose.prod.yml up -d\n\n```\n\nHere is the command that should be fine, but it fails with 'ContainerConfig' errors for redis' docker-compose:\n\n\n\n```\ndocker-compose -f docker-compose.prod.yml up -d --force-recreate\n\n```\n\nOutput...\n\n\n\n```\ndocker-compose -f docker-compose.prod.yml up -d --force-recreate\nRecreating app_redis_1 ...\n\nERROR: for app_redis_1 'ContainerConfig'\n\nTraceback (most recent call last):\n File \"docker-compose\", line 3, in <module>\n File \"compose\/cli\/main.py\", line 81, in main\n File \"compose\/cli\/main.py\", line 203, in perform_command\n File \"compose\/metrics\/decorator.py\", line 18, in wrapper\n File \"compose\/cli\/main.py\", line 1186, in up\n File \"compose\/cli\/main.py\", line 1182, in up\n File \"compose\/project.py\", line 702, in up\n File \"compose\/parallel.py\", line 108, in parallel_execute\n File \"compose\/parallel.py\", line 206, in producer\n File \"compose\/project.py\", line 688, in do\n File \"compose\/service.py\", line 581, in execute_convergence_plan\n File \"compose\/service.py\", line 503, in _execute_convergence_recreate\n File \"compose\/parallel.py\", line 108, in parallel_execute\n File \"compose\/parallel.py\", line 206, in producer\n File \"compose\/service.py\", line 496, in recreate\n File \"compose\/service.py\", line 615, in recreate_container\n File \"compose\/service.py\", line 334, in create_container\n File \"compose\/service.py\", line 922, in _get_container_create_options\n File \"compose\/service.py\", line 962, in _build_container_volume_options\n File \"compose\/service.py\", line 1549, in merge_volume_bindings\n File \"compose\/service.py\", line 1579, in get_container_data_volumes\nKeyError: 'ContainerConfig'\n[88001] Failed to execute script docker-compose\n\n```\n\nHere is the simple `docker-compose` config for the redis service:\n\n\n\n```\nversion: '3.8'\n\nservices:\n\n redis:\n image: redis:latest\n restart: always\n ports:\n - \"6379\"","questionMetadata":{"type":"version","tag":"other","level":"intermediate"},"answer":"As noted in countless posts here, and as noted in the comments by @ChrisBecke this was a cause of depreciated commands in Docker. It is now 2024 and things have updated.\n\n\nFor whatever reason, `--force-recreate` with `docker-compose` now fails on my production system after system updates, while `docker compose up -d --force-recreate` works as expected. (Notice the removal of `-`)\n\n\nWeird thing is I host on DigitalOcean and did run updates the other day via `apt-get...` and I did notice docker being updated...but this error was not easy to figure out the cause which is why I asked here. 
It also hasn't affected my staging env, so I'm not sure why the original commands stopped working..."}
{"questionId":"92b9fffa5c4528d411206c6dbd26efa6538457dbcc8dc646d0517f34578266f4","question":"Breaking brace after the start keyword\nThis is the simple program, it prints a terminal size when you resize it:\n\n\n\n```\n#!\/bin\/env raku\nuse NCurses;\ninitscr;\nloop {\n given getch() {\n when 27 { last } # exit when pressing esc\n when KEY_RESIZE {\n mvprintw 0, 0, \"{LINES}x{COLS}\";\n nc_refresh\n }\n }\n}\n\nendwin;\n\n```\n\nLet's add `await start { ... }` around the loop:\n\n\n\n```\nawait start { loop {\n given getch() {\n ### code\n }\n} }\n\n```\n\nNow this program doesn't work properly: it doesn't print anything when I resize a terminal, but prints the size when I press any key. Note that it still handles esc press correctly.\n\n\nFinally, let's remove the curly braces and the program will work as it should again:\n\n\n\n```\nawait start loop {\n given getch() {\n ### code\n }\n}\n\n```\n\nWhat is this dirty magic with `start` and braces?","questionMetadata":{"type":"conceptual","tag":"raku","level":"intermediate"},"answer":"A `loop` statement (at block level) iterates immediately. By contrast, a `loop` expression produces a lazy generator.\n\n\nWhen the `start` block has curly braces, then it's clearly a `loop` statement, because it's in a block. Thus the loop executes on another thread.\n\n\nWithout them, the compiler is considering it a `loop` expression (it's an interesting question whether it should). The `start` schedules the work on another thread, but the work it schedules is merely producing a lazy generator, not actually doing any iteration. Thus it completes immediately, and the `await` produces the lazy generator. Since the `await` is in sink (void) context, the generator is iterated. Note that since this is after the `await`, then the loop is executing on the main thread.\n\n\nSo only the curly form actually executes off the main thread, and it would appear that doesn't sit well with the NCurses library. I'm afraid I've no insights into why that would be."}
{"questionId":"bbce6ce5d46efea4be89560fee1e527d3898136ef959a00a47f081e50810b9cf","question":"Split array of integers into subarrays with the biggest sum of difference between min and max\nI'm trying to find the algorithm efficiently solving this problem:\n\n\n\n> \n> Given an unsorted array of numbers, you need to divide it into several subarrays of length from a to b, so that the sum of differences between the minimum and maximum numbers in each of the subarrays is the greatest. The order of the numbers must be preserved.\n> \n> \n> Examples:\n> \n> \n> \n> ```\n> a = 3, b = 7\n> input: [5, 8, 4, 5, 1, 3, 5, 1, 3, 1]\n> answer: [[5, 8, 4], [5, 1, 3], [5, 1, 3, 1]] (diff sum is 12)\n> \n> a = 3, b = 4\n> input: [1, 6, 2, 2, 5, 2, 8, 1, 5, 6]\n> answer: [[1, 6, 2], [2, 5, 2, 8], [1, 5, 6]] (diff sum is 16)\n> \n> a = 4, b = 5\n> input: [5, 8, 4, 5, 1, 3, 5, 1, 3, 1, 2]\n> answer: splitting is impossible\n> \n> ```\n> \n> \n\n\nThe only solution I've come up with so far is trying all of the possible subarray combinations.\n\n\n\n```\nfrom collections import deque\n\ndef partition_array(numbers, min_len, max_len):\n max_diff_subarray = None\n\n queue = deque()\n\n for end in range(min_len - 1, max_len):\n if end < len(numbers):\n diff = max(numbers[0:end + 1]) - min(numbers[0:end + 1])\n queue.append(Subarray(previous=None, start=0, end=end, diff_sum=diff))\n\n while queue:\n subarray = queue.popleft()\n\n if subarray.end == len(numbers) - 1:\n if max_diff_subarray is None:\n max_diff_subarray = subarray\n elif max_diff_subarray.diff_sum < subarray.diff_sum:\n max_diff_subarray = subarray\n continue\n\n start = subarray.end + 1\n\n for end in range(start + min_len - 1, start + max_len):\n if end < len(numbers):\n diff = max(numbers[start:end + 1]) - min(numbers[start:end + 1])\n queue.append(Subarray(previous=subarray, start=start, end=end, diff_sum=subarray.diff_sum + diff))\n else:\n break\n\n return max_diff_subarray\n\nclass Subarray:\n def __init__(self, previous=None, start=0, end=0, diff_sum=0):\n self.previous = previous\n self.start = start\n self.end = end\n self.diff_sum = diff_sum\n\nnumbers = [5, 8, 4, 5, 1, 3, 5, 1, 3, 1]\na = 3\nb = 7\nresult = partition_array(numbers, a, b)\nprint(result.diff_sum)\n\n```\n\nAre there any more time efficient solutions?","questionMetadata":{"type":"optimization","tag":"python","level":"intermediate"},"answer":"First let's solve a simpler problem. Let's run through an array, and give mins and maxes for all windows of fixed size.\n\n\n\n```\ndef window_mins_maxes (size, array):\n min_values = deque()\n min_positions = deque()\n max_values = deque()\n max_positions = deque()\n\n for i, value in enumerate(array):\n if size <= i:\n yield (i, min_values[0], max_values[0])\n if min_positions[0] <= i - size:\n min_values.popleft()\n min_positions.popleft()\n\n if max_positions[0] <= i - size:\n max_values.popleft()\n max_positions.popleft()\n\n while 0 < len(min_values) and value <= min_values[-1]:\n min_values.pop()\n min_positions.pop()\n min_values.append(value)\n min_positions.append(i)\n\n while 0 < len(max_values) and max_values[-1] <= value:\n max_values.pop()\n max_positions.pop()\n max_values.append(value)\n max_positions.append(i)\n\n yield (len(array), min_values[0], max_values[0])\n\n```\n\nThis clearly takes memory `O(size)`. What's less obvious is that it takes time `O(n)` to process an array of length `n`. But we can see that with amortized analysis. 
To each element we'll attribute the cost of checking the possible value that is smaller than it, the cost of some later element checking that it should be removed, and the cost of being added. That accounts for all operations (though this isn't the order that they happen) and is a fixed amount of work per element.\n\n\nAlso note that the memory needed for this part of the solution fits within `O(n)`.\n\n\nSo far I'd consider this a well-known dynamic programming problem. Now let's make it more challenging.\n\n\nWe will tackle the partition problem as a traditional dynamic programming problem. We'll build up an array `best_weight` of the best partition to that point, and `prev_index` of the start of the previous partition ending just before that point.\n\n\nTo build it up, we'll use the above algorithm to take a previous partition and add one of `min_len` to it. If it is better than the previous, we'll save its information in those arrays. We'll then scan forward from that partition and do that up to `max_len`. Then we move on to the next possible start of a partition.\n\n\nWhen we're done we'll find the answer from that code.\n\n\nHere is what that looks like:\n\n\n\n```\ndef partition_array(numbers, min_len, max_len):\n if max_len < min_len or len(numbers) < min_len:\n return (None, None)\n\n best_weight = [None for _ in numbers]\n prev_index = [None for _ in numbers]\n\n # Need an extra entry for off of the end of the array.\n best_weight.append(None)\n prev_index.append(None)\n\n best_weight[0] = 0\n\n for i, min_value, max_value in window_mins_maxes(min_len, numbers):\n window_start_weight = best_weight[i - min_len]\n if window_start_weight is not None:\n j = i\n while j - i < max_len - min_len and j < len(numbers):\n new_weight = window_start_weight + max_value - min_value\n if best_weight[j] is None or best_weight[j] < new_weight:\n best_weight[j] = new_weight\n prev_index[j] = i - min_len\n\n if numbers[j] < min_value:\n min_value = numbers[j]\n if max_value < numbers[j]:\n max_value = numbers[j]\n j += 1\n\n # And fill in the longest value.\n new_weight = window_start_weight + max_value - min_value\n if best_weight[j] is None or best_weight[j] < new_weight:\n best_weight[j] = new_weight\n prev_index[j] = i - min_len\n\n if best_weight[-1] is None:\n return (None, None)\n else:\n path = [len(numbers)]\n while prev_index[path[-1]] is not None:\n path.append(prev_index[path[-1]])\n path = list(reversed(path))\n partitioned = [numbers[path[i]:path[i+1]] for i in range(len(path)-1)]\n return (best_weight[-1], partitioned)\n\n```\n\nNote that we do `O(1)` work for each possible start and length. And so that is time `O((max_len + 1 - min_len)*n)`. And the data structures we used are all bounded above by `O(n)` in size. Giving the overall efficiency that I promised in the comments.\n\n\nNow let's test it.\n\n\n\n```\nprint(partition_array([5, 8, 4, 5, 1, 3, 5, 1, 3, 1], 3, 7))\nprint(partition_array([1, 6, 2, 2, 5, 2, 8, 1, 5, 6], 3, 4))\nprint(partition_array([5, 8, 4, 5, 1, 3, 5, 1, 3, 1, 2], 4, 5))\n\n```\n\nAnd the output is:\n\n\n\n```\n(12, [[5, 8, 4], [5, 1, 3], [5, 1, 3, 1]])\n(16, [[1, 6, 2], [2, 5, 2, 8], [1, 5, 6]])\n(None, None)"}
{"questionId":"33bf68c54e4ad8cf9ecaf1fb5d8c736f2c75c2ed42d4f735841d590ab0fb2567","question":"Where did sys.modules go?\n>>> import sys\n>>> del sys.modules['sys']\n>>> import sys\n>>> sys.modules\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: module 'sys' has no attribute 'modules'\n\n```\n\nWhy does re-imported `sys` module not have some attributes anymore?\n\n\nI am using Python 3.12.3 and it happens in macOS, Linux, and Windows. It happens in both the REPL and in a .py script. It does not happen in Python 3.11.","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"This is pretty obviously something you shouldn't do, naturally liable to break things. It happens to break things this particular way on the Python implementation you tried, but Python doesn't promise what will happen. Most of what I am about to say is implementation details.\n\n\n\n\n---\n\n\nThe `sys` module cannot be initialized like normal built-in modules, as it's responsible for so much core functionality. Instead, on interpreter startup, Python creates the `sys` module with the special function [`_PySys_Create`](https:\/\/github.com\/python\/cpython\/blob\/v3.12.3\/Python\/sysmodule.c#L3565). This function is responsible for (part of the job of) correctly setting up the `sys` module, including the `sys.modules` attribute:\n\n\n\n```\n if (PyDict_SetItemString(sysdict, \"modules\", modules) < 0) {\n goto error;\n }\n\n```\n\nWhen you do `del sys.modules['sys']`, the import system loses track of the `sys` module. When you try to import it again, Python tries to create an entirely new `sys` module, and it does so as if `sys` were an ordinary built-in module. It goes through the procedure for initializing ordinary built-in modules. This procedure leaves the new `sys` module in an inconsistent, improperly initialized state, as `sys` was never designed to be initialized this way.\n\n\nThere *is* support for *reloading* `sys`, although I believe the dev team is thinking of taking this support out - the use cases are very obscure, and the only one I can think of off the top of my head is obsolete. Part of the reinitialization ends up hitting a [code path](https:\/\/github.com\/python\/cpython\/blob\/v3.12.3\/Python\/import.c#L1260-L1284) intended for reloading `sys`, which updates its `__dict__` from a copy created early in the original initialization of `sys`, [right before `sys.modules` is set](https:\/\/github.com\/python\/cpython\/blob\/v3.12.3\/Python\/sysmodule.c#L3590-L3597):\n\n\n\n```\n interp->sysdict_copy = PyDict_Copy(sysdict);\n if (interp->sysdict_copy == NULL) {\n goto error;\n }\n\n if (PyDict_SetItemString(sysdict, \"modules\", modules) < 0) {\n goto error;\n }\n\n```\n\nThis copy is handled differently on earlier Python versions, hence the version-related behavior differences."}
{"questionId":"cb598e60a996b9f7eb7ff960ba27599a5168377bc352b3af0ec6f56be68acd52","question":"langchain\\_community & langchain packages giving error: Missing 1 required keyword-only argument: 'recursive\\_guard'\nAll of sudden `langchain_community` & `langchain` packages started throwing error:\nTypeError: ForwardRef.\\_evaluate() missing 1 required keyword-only argument: 'recursive\\_guard'\n\n\nThe error getting generated somewhere in `pydantic`\n\n\nI strongly suspect it is version mismatch. So I tried upgrading packages langchain, langchain\\_community, pydantic, langsmith etc. But no luck.\n\n\nMy current installed versions shows as under:\n\n\n\n```\nPython 3.12.4\n\nlangchain: 0.2.3\nlangchain_community: 0.2.4\nlangsmith: 0.1.75\npydantic: 2.7.3\ntyping_extensions: 4.11.0\n\n```\n\n`Pip check` also not showing any conflict.\n\n\nHere is complete trace of error. Any help would be really appreciated.\n\n\n\n```\nTypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'\n\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\streamlit\\runtime\\scriptrunner\\script_runner.py\", line 600, in _run_script\n exec(code, module.__dict__)\nFile \"C:\\MyProject\\MyScript.py\", line 20, in <module>\n from langchain_community.vectorstores import Chroma\nFile \"<frozen importlib._bootstrap>\", line 1412, in _handle_fromlist\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_community\\vectorstores\\__init__.py\", line 509, in __getattr__\n module = importlib.import_module(_module_lookup[name])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.1264.0_x64__qbz5n2kfra8p0\\Lib\\importlib\\__init__.py\", line 90, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_community\\vectorstores\\chroma.py\", line 20, in <module>\n from langchain_core.documents import Document\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\documents\\__init__.py\", line 6, in <module>\n from langchain_core.documents.compressor import BaseDocumentCompressor\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\documents\\compressor.py\", line 6, in <module>\n from langchain_core.callbacks import Callbacks\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\callbacks\\__init__.py\", line 22, in <module>\n from langchain_core.callbacks.manager import (\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\callbacks\\manager.py\", line 29, in <module>\n from langsmith.run_helpers import get_run_tree_context\nFile 
\"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langsmith\\run_helpers.py\", line 40, in <module>\n from langsmith import client as ls_client\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langsmith\\client.py\", line 52, in <module>\n from langsmith import env as ls_env\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langsmith\\env\\__init__.py\", line 3, in <module>\n from langsmith.env._runtime_env import (\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langsmith\\env\\_runtime_env.py\", line 10, in <module>\n from langsmith.utils import get_docker_compose_command\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langsmith\\utils.py\", line 31, in <module>\n from langsmith import schemas as ls_schemas\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langsmith\\schemas.py\", line 69, in <module>\n class Example(ExampleBase):\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\pydantic\\v1\\main.py\", line 286, in __new__\n cls.__try_update_forward_refs__()\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\pydantic\\v1\\main.py\", line 807, in __try_update_forward_refs__\n update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,))\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\pydantic\\v1\\typing.py\", line 554, in update_model_forward_refs\n update_field_forward_refs(f, globalns=globalns, localns=localns)\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\pydantic\\v1\\typing.py\", line 520, in update_field_forward_refs\n field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"C:\\Users\\lenovo\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\pydantic\\v1\\typing.py\", line 66, in evaluate_forwardref\n return cast(Any, type_)._evaluate(globalns, localns, set())\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"I am having the same issue. The stack is different, but the error comes from the same line pydantic\\v1\\typing.py\", line 66\n\n\nThis is referring to the python typing module (v3.12.4) that has an additional mandatory parameter 'recursive\\_guard'. 
There are other areas of the code in pydantic where this has been fixed (recursive\\_guard=set()).\n\n\nCheck this out --> <https:\/\/github.com\/pydantic\/pydantic-core\/issues\/1292>\n\n\nWithin this thread, they mention that using python v3.12.3 could temporarily solve the issue in 1292, probably because of this additional attribute in v3.12.4 (I am guessing here). This is not an option for me as my google alpha functions local deploy is not recognizing the --runtime=python311 and always takes the latest runtime (v3.12.4).\n\n\nI hope that they fix this too"}
{"questionId":"139a4e1224a5b5a12079fc42048b3d051b82135183dcc320778cc90ec60976a9","question":"Laravel 11 - Disable CSRF for a route\nI have a route that serves as a webhook endpoint that gets called by a remote service, but the calls that the service makes to the webhook always fail.\n\n\nAfter some inspection of the service logs, I learned that the service is getting an HTTP error code 419.\n\n\nI used to add exceptions inside the `$except` property of the `App\\Http\\Middleware\\VerifyCsrfToken` middleware, However, I'm on Laravel 11 and I can't find this middleware anymore. What is the solution to this problem?","questionMetadata":{"type":"version","tag":"php","level":"intermediate"},"answer":"Starting from Laravel 11, the `VerifyCsrfToken` middleware no longer exists within the application's skeleton.\n\n\nInstead, you can specify which routes should bypass the CSRF verification process using the `validateCsrfTokens()` method. You can call this method inside the `withMiddleware()` method callback within your `bootstrap\/app.php` file. For example:\n\n\n\n```\n<?php\n\nuse Illuminate\\Foundation\\Application;\nuse Illuminate\\Foundation\\Configuration\\Middleware;\n\nreturn Application::configure(basePath: dirname(__DIR__))\n ->withRouting(\n web: __DIR__.'\/..\/routes\/web.php',\n commands: __DIR__.'\/..\/routes\/console.php',\n channels: __DIR__.'\/..\/routes\/channels.php',\n health: '\/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n $middleware->validateCsrfTokens(except: [\n 'you-webhook-endpoint\/action-name' \/\/ <-- exclude this route\n ]);\n })->create();\n\n```\n\nMore information available at the documentation at: <https:\/\/laravel.com\/docs\/11.x\/csrf#csrf-excluding-uris>\n\n\n**Update:** You can also call the static `except()` method on the `VerifyCsrfToken` middleware class inside the `boot()` method of your `AppServiceProvider` class as following:\n\n\n\n```\n<?php\n\nnamespace App\\Providers;\n\nuse Illuminate\\Support\\ServiceProvider;\n\nclass AppServiceProvider extends ServiceProvider\n{\n public function boot(): void\n {\n \\Illuminate\\Foundation\\Http\\Middleware\\VerifyCsrfToken::except([\n 'submit'\n ]);\n }\n}"}
{"questionId":"c8103287eb062f333b114bb0a0f580d2ad835b535c9b9c1b42a1ab1becc5d766","question":"Expo stuck on: \"Welcome to Expo\" screen\nSo I've just installed Expo SDK51: `\"expo\": \"~51.0.8\",` and want to build something. But I have a weird error or bug that I only see the \"Welcome to Expo\" large text with a prompt to create a file in \"app\" dir. Although this should not occus since when I look up for the text it is nowhere in the files and the app dir already has the basic tab bar files with parallax. I have never seen this occur and I have no idea how to get out of this.\nYes I have tried clean run, yes I've tried restarting and so on... this is just weird.","questionMetadata":{"type":"version","tag":"javascript","level":"intermediate"},"answer":"Did your project use `react-native-dotenv` or any other env Babel plugins? According to the [Expo SDK 51 changelog](https:\/\/expo.dev\/changelog\/2024\/05-07-sdk-51):\n\n\n\n> \n> **react-native-dotenv is not compatible with expo-router.** If you are using the `react-native-dotenv` Babel plugin, it will overwrite `expo-router` configuration environment variables and you'll see the empty state \"Welcome to Expo\" screen. We are tracking the incomptibility in [expo\/expo#28933](https:\/\/github.com\/expo\/expo\/issues\/28933), but we recommend removing the library and Babel plugin, and instead using Expo CLI's built-in support for .env files ([learn more](https:\/\/docs.expo.dev\/guides\/environment-variables\/)).\n> \n> \n> \n\n\nI had the problem when I upgraded my project from Expo SDK 49 to Expo SDK 51, so I erased `module:react-native-dotenv` from babel.config.js and removed the library, but it turned out I also had `babel-plugin-transform-inline-environment-variables`, so I had to remove that too. After refactoring to use Expo environment variables, my project worked fine.\n\n\nHope this helps :)"}
{"questionId":"df0c8fa477d417b9d6449506a94c4287d6fb130651df3c04cc6847bc3cc2408f","question":"Is it possible to append subroutines to a Raku module at runtime?\nI would like to be able to add a `sub` to a module `Foo` at runtime.\n\n\nIn Perl, I would do something like:\n\n\n\n```\n*{'My::Module::foo'} = \\sub { 'FOO!' };\n\n```\n\nI know Raku doesn't have TypeGlobbing like Perl. Ideally it would be something like:\n\n\n\n```\nuse MONKEY;\n\nmodule Foo {};\n\nFoo.^add-sub('foo', sub { 'FOO!' });\n\n```\n\nIs this possible?","questionMetadata":{"type":"conceptual","tag":"raku","level":"intermediate"},"answer":"module Foo {}\nFoo::<&added-sub> = sub { 99 }\nsay Foo::added-sub; # 99\n\n```\n\nSee also:\n\n\n- [My answer to **Can't interpolate variable in another namespace**](https:\/\/stackoverflow.com\/questions\/65513383\/cant-interpolate-variable-in-another-namespace-raku\/65516514#65516514:%7E:text=Statically%20specified%20packages%2C%20dynamically%20specified%20variable).\n- [My nanswer to **Symbols Created in Stash at Runtime Not Available in PseudoStash**](https:\/\/stackoverflow.com\/a\/68886261\/1077672)."}
{"questionId":"683d3175d0df157e486e056ff182c59258d89e840a11ada6aa0f3df26b241b81","question":"Why is std::nextafter not constant expression?\nWhy code below has no problem with a2 but does not compile for z1?\n\n\n\n```\n#include <cmath> \/\/ std::nextafter\n#include <limits> \/\/ std::numeric_limits\n\nint main ()\n{\n constexpr float a1 {1.f};\n constexpr float a2 {std::nextafter(a1, std::numeric_limits<float>::max())};\n constexpr float z0 {0.f};\n constexpr float z1 {std::nextafter(z0, std::numeric_limits<float>::max())};\n \n return 0;\n}\n\n```\n\nCompiled with GCC 13.2\n\n\n\n```\nIn file included from <source>:1:\n\/opt\/compiler-explorer\/gcc-13.2.0\/include\/c++\/13.2.0\/cmath: In function 'int main()':\n<source>:9:39: in 'constexpr' expansion of 'std::nextafter(((float)z0), std::numeric_limits<float>::max())'\n\/opt\/compiler-explorer\/gcc-13.2.0\/include\/c++\/13.2.0\/cmath:2417:32: error: '__builtin_nextafterf(0.0f, 3.40282347e+38f)' is not a constant expression\n 2417 | { return __builtin_nextafterf(__x, __y); }\n\n```\n\nSo GCC compiled a2 correctly but is unable to compile z1.\n\n\nNote:\nClang 14.0 and MSVC 19.38 have problems even with a2.","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"libstdc++13.2.0 does not seem to implement [`std::nextafter()`](https:\/\/github.com\/gcc-mirror\/gcc\/blob\/releases\/gcc-13.2.0\/libstdc%2B%2B-v3\/include\/tr1\/cmath#L894-L910) as `constexpr` yet.\n\n\nGCC turns `std::nextafter(a1, std::numeric_limits<float>::max())` into a constant value. This is a lucky exceptional case when compilers can evaluate rare expressions as constexpr even if they are not marked as such.\n\n\nGCC can't turn `std::nextafter(z0, std::numeric_limits<float>::max())` into a constant value, it turns `__builtin_nextafterf()` into the call to the C function `nextafterf()` that can't be constexpr. See <https:\/\/godbolt.org\/z\/v7KMrTdfW>\n\n\nThe code can be simplified to\n\n\n\n```\n#include <cmath> \/\/ std::nextafter\n#include <limits> \/\/ std::numeric_limits\n\nint main ()\n{\n constexpr float a2 {std::nextafter(1.f, std::numeric_limits<float>::max())};\n constexpr float z1 {std::nextafter(0.f, std::numeric_limits<float>::max())};\n return 0;\n}\n\n```\n\nwith the similar Assembler result: <https:\/\/godbolt.org\/z\/KE9vGd6j7>"}
{"questionId":"0510ab8e206bbb69b1ea93ecbd25a29d5005538feaa7e76afbb6c20b7f73c47d","question":"pdf.js pdfjs-dist Promise.withResolvers is not a function\nI'm trying to extract data from pdf files and return it. here's the code in the serverside in astro\n\n\n\n```\nimport * as pdfjsLib from \"pdfjs-dist\";\npdfjsLib.GlobalWorkerOptions.workerSrc = \"..\/..\/node_modules\/pdfjs-dist\/build\/pdf.worker.mjs\";\n\nexport const contentExtractor = async (arrayBufferPDF: ArrayBuffer): Promise<string> => {\n const pdf = (pdfjsLib).getDocument(arrayBufferPDF);\n return pdf.promise.then(async (pdf) => {\n let totalContent = \"\"\n const maxPages = pdf._pdfInfo.numPages;\n\n for (let pageNumber = 1; pageNumber <= maxPages; pageNumber++) {\n const page = await pdf.getPage(pageNumber);\n const pageContent = await page.getTextContent();\n const content = pageContent.items.map((s: any) => s.str).join(\" \")\n totalContent = totalContent + content\n }\n return totalContent\n })\n}\n\n```\n\nand the error is\n\n\n\n```\n12:44:40 [ERROR] Promise.withResolvers is not a function\n Stack trace:\n at \/Users\/some-user\/Documents\/Projects\/Github\/pdf-extractor\/app\/node_modules\/pdfjs-dist\/build\/pdf.mjs:3026:32\n [...] See full stack trace in the browser, or rerun with --verbose.\n\n```\n\nI don't understand where the problem is. Could someone help me with it?","questionMetadata":{"type":"debugging","tag":"javascript","level":"intermediate"},"answer":"The build of PDF.js you are using does not support running in Node.js (i.e. only in the browser). The error comes from [`Promise.withResolvers` being called, which is not supported by Node.js](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/Promise\/withResolvers#browser_compatibility).\n\n\nIt [seems](https:\/\/github.com\/mozilla\/pdf.js\/issues\/18006), the recommended way to run it under Node.js is to use the [legacy build](https:\/\/github.com\/mozilla\/pdf.js\/wiki\/Frequently-Asked-Questions#faq-support) (using `pdfjs-dist\/legacy\/build\/pdf.js`)."}
{"questionId":"859a8141f2cb01408940d14e2f001fd1d46ee82223ed3550b78c00647536c4bc","question":"mypy Cannot infer type argument. Difference between List and Iterable\nT = TypeVar(\"T\", bound=Union[str, int])\n\n\ndef connect_lists(list_1: list[T], list_2: list[T]) -> list[T]:\n out: list[T] = []\n out.extend(list_1)\n out.extend(list_2)\n return out\n\nconnect_lists([1, 2], [\"a\", \"b\"])\n\n```\n\nmypy:\n\n\n\n> \n> error: Cannot infer type argument 1 of \"connect\\_lists\" [misc]\n> \n> \n> \n\n\n\n```\nT = TypeVar(\"T\", bound=Union[str, int])\n\n\ndef connect_lists(list_1: Iterable[T], list_2: Iterable[T]) -> list[T]:\n out: list[T] = []\n out.extend(list_1)\n out.extend(list_2)\n return out\n\nconnect_lists([1, 2], [\"a\", \"b\"])\n\n```\n\nNow mypy doesn't raise an error.\n\n\nWhat is the difference between List and Iterable in this case?","questionMetadata":{"type":"conceptual","tag":"python","level":"intermediate"},"answer":"Iterable` is covariant - an `Iterable[int]` is also an `Iterable[int|str]`.\n\n\n`list` is not covariant - a `list[int]` is not a `list[int|str]`, because you can add strings to a `list[int|str]`, which you can't do with a `list[int]`.\n\n\nmypy infers the types of `[1, 2]` and `[\"a\", \"b\"]` as `list[int]` and `list[str]` respectively. With the first definition of `connect_objects`, there is no choice of `T` that will make the call valid. But with the second definition, a `list[int]` is an `Iterable[int]`, which is an `Iterable[int|str]`, and a `list[str]` is similarly also an `Iterable[int|str]`, so `T` is inferred as `int|str`.\n\n\nI don't think there's actually a spec yet for how type inference works. There may never be such a spec. Future mypy versions might perform this inference differently, for example, performing context-sensitive inference to infer a type of `list[int|str]` for both input lists, making the first version of the code pass type checking."}
{"questionId":"f2f8eb3eaacd97ad37c972df6a4c9d967717eaa9529ae733c00c1b545d67b5f9","question":"Prevent jobs on the cluster from running on production code during deployment\nI have a script that runs for a few minutes as a job on the cluster in the production environment. There are between 0 and 100 such jobs, each with 1 script per job, running at the same time on the cluster. Usually, there are no jobs running, or a burst of about 4-8 such jobs.\n\n\n**I want to prevent such jobs from running when I deploy a new version of the code into production.**\n\n\n**How do I do that to optimize maintainability?**\n\n\nMy initial idea was this:\n\n\n1. Use a semaphore file or a lock file that is created at the beginning of deployment and then removed after the code has been deployed. Deploy runs for 0.5 - 10 min, depending on the complexity of the current deploy tasks.\n2. This lock file is also automatically deleted by a *separate* cron job after, for example, 30 min, if deploy fails to remove this file. For example, if the deploy in rudely killed, this file should not hang around forever blocking the jobs. That is, the file is deleted by a separate cron job if it is older than 30 minutes.\n3. The production code checks for this lock file and waits until it is gone. So the jobs wait no more than 30 min.\n\n\nI am concerned about possible race conditions, and considering maybe using a database-based solution. In the case of my application, I would use postgreSQL. This database-based solution may be more complex to implement and maintain, but may be less probe to race conditions.\n\n\nPerhaps there is a standard mechanism to achieve this in Capistrano, which is used for deployment of this code?\n\n\n**Notes:**\n\n\n- When you answer the question, please compare **maintainability** of your suggested solution with that of the simple solution I propose above (using lock files)\n- I am not sure if I need to take the **race conditions** into account. That is, is this system (with lock files) really race condition-prone? Or is it an unlikely possibility?\n\n\n**FAQs:**\n\n\n\n> \n> Is there a particular reason these jobs shouldn't run during deployment?\n> \n> \n> \n\n\nI had cases when multiple jobs would run during mid-deployment, and fail because of that. Finding and rerunning such failed jobs is time-consuming. Delaying them during deployment carries only a small and rare performance hit, and is by far the most acceptable solution. For our system, maintainability is priority number one.","questionMetadata":{"type":"implementation","tag":"sql","level":"intermediate"},"answer":"Working with advisory locks at simplest level using `psql`.\n\n\nSession 1\n\n\n\n```\nselect pg_advisory_lock(3752667);\n\n```\n\nContents of advisory\\_lock\\_test.sql file:\n\n\n\n```\nselect pg_advisory_lock(3752667);\nselect \"VendorID\" from nyc_taxi_pl limit 10;\n\n```\n\nThen session 2:\n\n\n\n```\npsql -d test -U postgres -p 5452 -f advisory_lock_text.sql \nNull display is \"NULL\".\n\n```\n\nThen in session 1:\n\n\n\n```\nselect pg_advisory_unlock(3752667);\n\n```\n\nBack to session 2:\n\n\n\n```\nNull display is \"NULL\".\n pg_advisory_lock \n------------------\n \n(1 row)\n\n VendorID \n----------\n 1\n 2\n 2\n 2\n 2\n 2\n 1\n 1\n 2\n 2\n(10 rows)\n\n```\n\n**Note**:\n\n\n\n> \n> The below is using session level locks. 
Transaction locks are also available using `pg_advisory_xact_lock`\n> \n> \n> \n\n\nBasically you create a lock in a session with `pg_advisory_lock(3752667)` where the number can be one 64 bit integer of two 32 bit integers. These could come from values that you fetch from a table so a number is scoped to a particular action e.g. `select pg_advisory_lock((select lock_number from a_lock where action = 'deploy'));`. Then in the second or other sessions you try to acquire a lock on the same number. If the number is in use, not unlocked or the original session did not exit, the other sessions will wait until the original session releases the lock. At that point the rest of the commands will run.\n\n\nIn your case create a number, possibly in a table, that is associated with deploying. When you run the deployments lock on the number before you run the changes, then unlock at end of deployment. If the deployment fails and the session ends the lock will also be released The other scripts would also need to start with attempting to lock on that number also. If it is in use they will wait until it is released and then run the rest of the script commands and unlock. How manageable this is depends on the number of scripts you are dealing with and getting people to stick to the process."}
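To make the waiting side concrete, here is a hedged sketch of a job wrapper using `psycopg2` (the jobs in the question may well be in another language, but the SQL calls are the same from any driver); `3752667` is just the arbitrary key reused from the session example above, and `run_job` is a placeholder for the actual work.

```python
import psycopg2

DEPLOY_LOCK_KEY = 3752667  # arbitrary 64-bit key, shared with the deploy script


def run_job_behind_deploy_lock(dsn: str, run_job) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            # Session-level advisory lock: blocks here while the deploy
            # process holds the same key, and proceeds once it unlocks
            # (or its session ends, e.g. the deploy is killed).
            cur.execute("SELECT pg_advisory_lock(%s)", (DEPLOY_LOCK_KEY,))
            try:
                run_job()
            finally:
                cur.execute("SELECT pg_advisory_unlock(%s)", (DEPLOY_LOCK_KEY,))
    finally:
        conn.close()
```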
{"questionId":"6a83dd2f95a746b9af7d71a71aa2d84564d6d443e991b57f8befefdbe0428631","question":"Locally stored images not working after upgrade to expo 51\nAfter upgrading from Expo SDK 49 to 51 my images are broken. The images are stored locally in the assets folder of the project and I use expo-image.\n\n\n\n```\n\"dependencies\": {\n\"@react-native-async-storage\/async-storage\": \"1.23.1\",\n\"@react-navigation\/bottom-tabs\": \"^6.5.7\",\n\"@react-navigation\/native\": \"^6.1.6\",\n\"@react-navigation\/native-stack\": \"^6.9.12\",\n\"expo\": \"^51.0.8\",\n\"expo-application\": \"~5.9.1\",\n\"expo-av\": \"~14.0.5\",\n\"expo-constants\": \"~16.0.1\",\n\"expo-device\": \"~6.0.2\",\n\"expo-font\": \"~12.0.5\",\n\"expo-image-picker\": \"~15.0.5\",\n\"expo-intent-launcher\": \"~11.0.1\",\n\"expo-linear-gradient\": \"~13.0.2\",\n\"expo-localization\": \"~15.0.3\",\n\"expo-notifications\": \"~0.28.3\",\n\"expo-secure-store\": \"~13.0.1\",\n\"expo-splash-screen\": \"~0.27.4\",\n\"expo-status-bar\": \"~1.12.1\",\n\"expo-updates\": \"~0.25.14\",\n...\n\"expo-image\": \"~1.12.9\"\n\n```\n\n},\n\n\nAn example on how I fetch the images.\n\n\n\n```\n\/assets\n\/screens\n homescreen.tsx <- snip below is from here\npackage.json\n\n<Image\n contentFit=\"contain\"\n style={styles.headerImage}\n source={require(\"..\/assets\/images\/ArtHomePage1.png\")}\n \/>\n\n```\n\nI went over the documentation and the changelogs between 49 to 50 and 50 to 51 and I couldn't find anything back regarding a change to the images or local files.","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"This was a dependency issue caused by the upgrade. I saw the response on <https:\/\/stackoverflow.com\/a\/78523103\/1174076> and this is it. I had run the doctor before, but somehow it must have not worked, so I ran it again:\n\n\n\n```\nnpx expo-doctor@latest\n\n```\n\nAnd 3 dependencies were listed as requiring an update, most notably `expo` itself:\n\n\n\n```\nDetailed check results:\n\nThe following packages should be updated for best compatibility with the installed expo version:\n [email protected] - expected version: ~51.0.11\n [email protected] - expected version: ~0.27.5\n [email protected] - expected version: 0.74.2\nYour project may not work correctly until you install the expected versions of the packages.\nFound outdated dependencies\nAdvice: Use 'npx expo install --check' to review and upgrade your dependencies.\n\n```\n\nSo running `npx expo install --check` fixed it."}
{"questionId":"c8471dfc8f60a3de55264b5e9c3978a47aa74bf3abc492f76aaade1118869be5","question":"How to hide implementation details in C++ modules?\nI am new to C++ modules. I am using Visual Studio 2022.\n\n\nLet's say I am creating a DLL, and I don't want my users to see implementation details of my class. How can I achieve this in C++ modules? Mostly I am seeing examples over the Internet, implementing functions in the same file.\n\n\nCan I safely say, module interfaces are similar to header files & module implementation is similar to cpp file?\n\n\nHere is my example implementation, is this correct?\n\n\nAnimalParent.ixx\n\n\n\n```\nexport module Animal:Parent;\n\nexport class Animal\n{\npublic:\n virtual void say();\n};\n\n```\n\nAnimalParent.cpp\n\n\n\n```\nmodule Animal:Parent;\n\n#include <iostream>\n\nvoid Animal::say()\n{\n std::cout << \"I am Animal\" << std::endl;\n}\n\n```\n\nAnimal.cat.ixx\n\n\n\n```\nexport module Animal:Cat;\n\nimport :Parent;\n\nexport class Cat :public Animal\n{\npublic:\n void say() override;\n};\n\n```\n\nAnimalCat.cpp\n\n\n\n```\nmodule Animal:Cat;\n\n#include <iostream>\n\nvoid Cat::say()\n{\n std::cout << \"I am cat\" << std::endl;\n}\n\n```\n\nAnimal.ixx\n\n\n\n```\nexport module Animal;\n\nexport import :Parent;\nexport import :Cat;\n\n```\n\nQuestions:\n\n\n1. Is this implementation correct?\n2. Can I safely assume that files contain `export module name(.ixx extension)` - similar to header files & `module name` - similar to respective source file?\n3. If I shift this DLL to my customer, what all should I give? My DLL and the folder containing `.ixx` files?\n4. How will they integrate my DLL? In the current header files system, they will refer to the header files directory in `Additional include directories`, and link to the lib. Here, do they have to refer to the folder containing `.ixx` files in `Additional include directories` & link the lib?","questionMetadata":{"type":"implementation","tag":"c++","level":"intermediate"},"answer":"question 1 & 2\uff1a Yes.\n\n\nquestion 3.\n\n\nTo use module in dll, you need to export the symbol using `__declspec(dllexport)` .\n\n\nAnimalParent.ixx\n\n\n\n```\nexport module Animal:Parent;\nexport class __declspec(dllexport) Animal\n{\npublic:\n virtual void say();\n};\n\n```\n\nAnimal.cat.ixx\n\n\n\n```\nexport module Animal:Cat;\n\nimport :Parent;\nexport class __declspec(dllexport) Cat :public Animal\n{\npublic:\n void say() override;\n};\n\n```\n\n\n> \n> what all should I give\n> \n> \n> \n\n\n1. DLL and Lib files.\n2. `ModuleName.ixx.ifc` files. These files are generated by Visual Studio after building the DLL, similar to compiled header files. You can find them in the obj folder of your project (vcxproj project folder\/x64\/debug or release\/).\n\n\nFor your project, there are three files: `AnimalParent.ixx.ifc, Animal.cat.ixx.ifc, Animal.ixx.ifc`.\n\n\nquestion 4: how to use the module\n\n\n1.Link the lib file.\n\n\n2.Import ifc files:\nIn project properties-> C\/C++ -> Command Line -> Additional options:\n\n\n\n```\n\/reference \"<path> \\AnimalParent.ixx.ifc\" \n\/reference \"<path> \\Animal.cat.ixx.ifc,\" \n\/reference \"<path> \\Animal.ixx.ifc\" \n\n```\n\nCode: C++ 20 standard\n\n\n\n```\nimport Animal;\nint main()\n{\n Animal Ani;\n Ani.say();\n Cat cat;\n cat.say();\n}"}
{"questionId":"fe0a2b2cddcf1cd951245b66e5426bf20ed3b5f0ca15b194fff193c9f4040b31","question":"Unable to locate the api.php route file in Laravel 11\nI am attempting to integrate Laravel 11 with React.js for data retrieval and transmission between the two. However, I cannot locate the `routes\/api.php` file in the latest version of Laravel.\n\n\nI have searched for others experiencing the same issue, but I have yet to find any similar cases since Laravel 11 was only released a week ago.","questionMetadata":{"type":"version","tag":"php","level":"beginner"},"answer":"<https:\/\/laravel.com\/docs\/11.x\/routing#api-routes>\n\n\nIf your application will also offer a stateless API, you may enable API routing using the `install:api` Artisan command:\n\n\n\n```\nphp artisan install:api\n\n```\n\n[...] In addition, the `install:api` command creates the `routes\/api.php` file."}
{"questionId":"f1020dbf6b727903d7ec2868c770fafefc15a82d12cf4c54357a9a045b24b37d","question":"gitlab-runner update failed with GPG error signatures were invalid\nI\u2019m unable to update my Gitlab-runner install due to bad keys being detected. Is this a Gitlab update issue or something gone wrong on my system? Update and install was working without problems in 2023.\n\n\n\n```\nroot@gitlab-runner:~# apt-get update\nHit:1 http:\/\/security.debian.org bookworm-security InRelease\nHit:2 http:\/\/deb.debian.org\/debian bookworm InRelease\nGet:3 https:\/\/packages.gitlab.com\/runner\/gitlab-runner\/debian bookworm InRelease [23.3 kB]\nErr:3 https:\/\/packages.gitlab.com\/runner\/gitlab-runner\/debian bookworm InRelease\n The following signatures were invalid: EXPKEYSIG 3F01618A51312F3F GitLab B.V. (package repository signing key) <[email protected]>\nFetched 23.3 kB in 1s (21.0 kB\/s)\nReading package lists... Done\nW: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https:\/\/packages.gitlab.com\/runner\/gitlab-runner\/debian bookworm InRelease: The following signatures were invalid: EXPKEYSIG 3F01618A51312F3F GitLab B.V. (package repository signing key) <[email protected]>\nW: Failed to fetch https:\/\/packages.gitlab.com\/runner\/gitlab-runner\/debian\/dists\/bookworm\/InRelease The following signatures were invalid: EXPKEYSIG 3F01618A51312F3F GitLab B.V. (package repository signing key) <[email protected]>\nW: Some index files failed to download. They have been ignored, or old ones used instead.\n\n```\n\nMany suggest to add gitlab apt gpg key like this\n\n\n\n```\nroot@gitlab-runner:~# curl -s https:\/\/packages.gitlab.com\/gpg.key | apt-key add -\nOK\n\n```\n\nStill it does not resolve the issue on Debian 12 and Ubuntu 22. Same error on apt update.","questionMetadata":{"type":"version","tag":"bash","level":"intermediate"},"answer":"To resolve this situation in 2024, especially on old installs, first we need to remove already added gitlab apt gpg key (`EXPKEYSIG 3F01618A51312F3F`).\n\n\nRun the command:\n\n\n\n```\nsudo apt-key del \"F640 3F65 44A3 8863 DAA0 B6E0 3F01 618A 5131 2F3F\"\n\n```\n\nand run latest **gitlab runner** install script:\n\n\n\n```\ncurl -L \"https:\/\/packages.gitlab.com\/install\/repositories\/runner\/gitlab-runner\/script.deb.sh\" | sudo bash && sudo apt update\n\n```\n\nThat's it, now you can do `apt upgrade`.\n\n\nUpdate from comment below, if you have the same type of issue with self hosted **gitlab-ce**, please run this instead:\n\n\n\n```\ncurl -L \"https:\/\/packages.gitlab.com\/install\/repositories\/gitlab\/gitlab-ce\/script.deb.sh\" | sudo bash && sudo apt update\n\n```\n\nMore details:\n\n\nNote that apt-key on Debian 12 is obsolete:\n\n\n\n```\nroot@gitlab-runner:~# apt-key list\nWarning: apt-key is deprecated. 
Manage keyring files in trusted.gpg.d instead (see apt-key(8)).\n\n```\n\nSo proper way in general should be to put dearmored gpg signature to \/etc\/apt\/trusted.gpg.d, but its not a gitlab case.\n\n\nIf you look at `\/etc\/apt\/sources.list.d\/runner_gitlab-runner.list` file, you will notice gpg key mentioned directly:\n\n\n\n```\n# this file was generated by packages.gitlab.com for\n# the repository at https:\/\/packages.gitlab.com\/runner\/gitlab-runner\n\ndeb [signed-by=\/usr\/share\/keyrings\/runner_gitlab-runner-archive-keyring.gpg] https:\/\/packages.gitlab.com\/runner\/gitlab-runner\/debian\/ bookworm main\ndeb-src [signed-by=\/usr\/share\/keyrings\/runner_gitlab-runner-archive-keyring.gpg] https:\/\/packages.gitlab.com\/runner\/gitlab-runner\/debian\/ bookworm main\n\n```\n\nThis is the reason, why manually adding gpg key with apt-key does not resolve the issue.\nExecuting install script again, would deploy latest key signature."}
{"questionId":"610eadf3eb83b2c3b29703d917cd58b3b1bd8599b860b0022d354da8b6c6e27b","question":"ReactNative App build failing with Flipper error\nI have inherited a ReactNative app, that i need to get up and running in dev.\nI am completely a noob.\n\n\nI think i am close.\n\n\nWhen I run\n\n\n\n```\nnpm run ios\n\n```\n\nI get the error\n\n\n\n> \n> CompileC \/Users\/me\/Library\/Developer\/Xcode\/DerivedData\/tpp-cdzrkyfpwzsixefrnjryzmdnucct\/Build\/Intermediates.noindex\/Pods.build\/Debug-iphoneos\/FlipperKit.build\/Objects-normal\/arm64\/FlipperPlatformWebSocket.o \/Users\/me\/Projects\/tpp\/ios\/Pods\/FlipperKit\/iOS\/FlipperKit\/FlipperPlatformWebSocket.mm normal arm64 objective-c++ com.apple.compilers.llvm.clang.1\\_0.compiler (in target 'FlipperKit' from project 'Pods')\n> \n> \n> \n\n\nAfter some googling i have added in the project root the file react-native.config.js with the contents\n\n\n\n```\nmodule.exports = {\n dependencies: {\n ...(process.env.CI \/\/ or `process.env.NO_FLIPPER` for [email protected] and above\n ? { 'react-native-flipper': { platforms: { ios: null } } }\n : {}),\n},\n project: {\n ios: {},\n android: {},\n },\n};\n\n```\n\nThe last thing the article said i needed to do was\n\n\n\n> \n> You can specify NO\\_FLIPPER=1 when installing your iOS pods, to instruct React Native not to install Flipper. Typically, the command would look like this:\n> \n> \n> from the root folder of the react native project\n> \n> \n> bundle exec pod install --project-directory=ios\n> \n> \n> \n\n\nThis is where i am getting in the weeds.\n\n\nWhere does this command \"bundle exec pod install --project-directory=ios\" go, since i am running \"npm run ios\" ??","questionMetadata":{"type":"debugging","tag":"javascript","level":"intermediate"},"answer":"bundle exec pod install --project-directory=ios`\n\n\nIt is similar to `cd ios && pod install`. It means, you have to run this before `npm run ios`. You can use this instead. But for above command, you have to run this command from root directory of your project. As you can see in your folder structure there will be `ios` folder. Here the full explanation of this command:-\n\n\n`bundle exec`: This part of the command ensures that the pod command is executed within the context of a Ruby bundle. It's a way to ensure that the correct version of CocoaPods (if specified in the project's Gemfile) is used.\n\n\n`pod install`: This is the CocoaPods command that installs the dependencies specified in the Podfile of the project. It resolves dependencies and downloads the necessary libraries.\n\n\n`--project-directory=ios`: This flag specifies the directory where the Podfile is located. In this case, it's telling CocoaPods to look for the Podfile in the ios directory. This is useful in projects where the iOS code is organized into a subdirectory, commonly named ios.\n\n\nAlso the error, you are trying to solve. You have to follow these steps:-\n\n\n`Method 1:`\n\n\n\n```\nStep 1: cd ios\nStep 2: pod repo update\nStep 3: pod install\n\n```\n\nmove to method 2, if it won't work.\n\n\n`Method 2:`\n\n\nStep 1: If you are using a `react-native-flipper` your iOS build will fail when `NO_FLIPPER=1` is set.\nbecause `react-native-flipper` depends on (FlipperKit,...) that will be excluded\n\n\nTo fix this you can also exclude `react-native-flipper` using a `react-native.config.js`\n\n\n\n```\nmodule.exports = {\n ..., \/\/ other configs\n dependencies: {\n ...(process.env.NO_FLIPPER\n ? 
{ 'react-native-flipper': { platforms: { ios: null } } }\n : {}),\n }\n};\n\n```\n\nStep 2: You have to run one of these commands from root directory of the project:\n\n\n`NO_FLIPPER=1 bundle exec pod install --project-directory=ios`\n\n\nor\n\n\n`cd ios && NO_FLIPPER=1 pod install"}
{"questionId":"bfccf258f6f57d8e97a1ab2b3508cadae7854156da4c941cb36bab62a815003a","question":"Does it make any sense to define operator< as noexcept?\nI know that it makes perfect sense to define e.g. move constructor as `noexcept` (if possible) and I think I understand the effect of that.\n\n\nBut I have never seen a similar discussion about `operator<()` of a type used e.g. in `std::set<>`.\n\n\nDoes also non-throwing comparator have some (potential) optimizing effect (when used in `std::set<>`, `std::map<>`, `std::sort()` and similar)?","questionMetadata":{"type":"conceptual","tag":"c++","level":"intermediate"},"answer":"A move constructor is somewhat special, because there is an obvious fallback. In a situation where you cannot allow a move to throw an exception you can call a `noexcept` move constructor and when the move constructor is not `noexcept` you can fallback to a copy. Hence, declaring the move constructor that does not throw exceptions as `noexcept` is a potential optimization.\n\n\n\n\n---\n\n\nFor example `std::vector::push_back` does try to give a strong exception guarantee (from cppreference):\n\n\n\n> \n> If an exception is thrown (which can be due to `Allocator::allocate()` or element copy\/move constructor\/assignment), this function has no effect (strong exception guarantee).\n> \n> \n> \n\n\nHowever, since C++11:\n\n\n\n> \n> If T's move constructor is not `noexcept` and T is not CopyInsertable into `*this`, vector will use the throwing move constructor. If it throws, the guarantee is waived and the effects are unspecified.\n> \n> \n> \n\n\nRemember that pushing an element may require reallocations, ie to copy\/move all elements to different memory location.\n\n\nThe above means: If the type is CopyInsertable, ie one has a choice between copying and moving, the `vector` will move them when the move constructor is `noexcept`. In this case you can gain performance by declaring the move constructor `noexcept`.\n\n\nWhen the element type cannot be copied the move constuctor is used in any case (and the vector relaxes its exception guarantee, thanks to Fran\u00e7ois Andrieux for making me aware of `insert` with similar balancing of exception safety vs performance). In this case what you gain by a `noexcept` move constructor is a stronger exception guarantee.\n\n\n\n\n---\n\n\n`std::vector::push_back` and similar examples is why there is so much discussion about declaring a move constructor `noexcept`.\n\n\nFor `<` there is no such obvious fallback that would be more expensive when `<` is not `noexcept`. Either you can call `<` or you can't.\n\n\nOtherwise `<` has the same advantages as any other `noexcept` method, that the compiler knows it does never throw (and if it does `std::terminate` is called, but the usual stack unwinding does not necessarily take place)."}
{"questionId":"468acde70fe6eb013853ef437ba76ede83b63399bf42a997e1dca292fe81d418","question":"BoringSSL-GRPC unsupported option '-G' for target 'arm64-apple-ios15.0'\nAfter updating to the XCode 16 Beta, when building app i get this error (in attachments), thats basically it. Is there any way to fix that or should I wait for BoringSSL update?\n\n\nI've tried pod update, changing Minimum Deployment version, it didnt helped.","questionMetadata":{"type":"version","tag":"other","level":"intermediate"},"answer":"If you are using Cocoapods this is a quick fix:\n\n\nAdd this to you Podfile ->\n\n\n\n```\npost_install do |installer|\n installer.pods_project.targets.each do |target|\n if target.name == 'BoringSSL-GRPC'\n target.source_build_phase.files.each do |file|\n if file.settings && file.settings['COMPILER_FLAGS']\n flags = file.settings['COMPILER_FLAGS'].split\n flags.reject! { |flag| flag == '-GCC_WARN_INHIBIT_ALL_WARNINGS' }\n file.settings['COMPILER_FLAGS'] = flags.join(' ')\n end\n end\n end\n end\nend\n\n```\n\nThis issue is from GRPC: <https:\/\/github.com\/grpc\/grpc\/pull\/36904>"}
{"questionId":"b67795f39043e1b7779786a0fa69d95d3ebb65c8ba4104d54af26381b098d865","question":"Is there an option to install R packages without documentation for more efficient storing?\nI am installing packages based on a Docker file in a machine with very small storage capacities.\nMy question is whether there is any way to install a more lightweight version of R packages that avoids non-critical bits of the package for deployed code such as the documentation.\nIs there a way to do this through `install.packages`?\nOtherwise, is there any other way to do it?","questionMetadata":{"type":"implementation","tag":"r","level":"beginner"},"answer":"You can manually specify what you'd like to have installed, for example:\n`R CMD INSTALL [options] <package-name>` or with `install.packages(\"package-name\", INSTALL_opts = c(\"--option1\", \"--option2\"))` where relevant options for your case might be the following:\n\n\n\n```\n --no-docs do not install HTML, LaTeX or examples help\n --no-html do not build HTML help\n --no-R, --no-libs, --no-data, --no-help, --no-demo, --no-exec,\n --no-inst\n suppress installation of the specified part of the\n package for testing or other special purposes\n --libs-only only install the libs directory\n --data-compress= none, gzip (default), bzip2 or xz compression\n to be used for lazy-loading of data\n --resave-data re-save data files as compactly as possible\n --compact-docs re-compress PDF files under inst\/doc"}
{"questionId":"ca9df21efa0e5efbc02ee205faba2a0b601d6ec8f81d44732120c448d1f9bc13","question":"Get the row number in which the value in a column changes\nI have this dataframe:\n\n\n\n```\ndat <- data.frame(Var1 = c(1,1,1,1,2,2,3,3,3,3,3,3,4,4,4,5,5,5,5))\n\n> dat\n Var1\n1 1\n2 1\n3 1\n4 1\n5 2\n6 2\n7 3\n8 3\n9 3\n10 3\n11 3\n12 3\n13 4\n14 4\n15 4\n16 5\n17 5\n18 5\n19 5\n\n```\n\nI want to get the row numbers where a new value appears for the first time.\n\n\nThe result should be:\n\n\n\n```\nc(1, 5, 7, 13, 16)\n\n```\n\n\n\n---\n\n\nI got as far as identifying the unique values with `unique(dat$Var1)` and finding the first row for one value with `min(which(dat$Var1 == 1))`, but I don't know how to combine the two without using a loop.\n\n\nIdeally I'm looking for a solution in base R, unless the solution in some package is *extremely* simple (e.g. `some.function(dat$Var1)`).","questionMetadata":{"type":"implementation","tag":"r","level":"beginner"},"answer":"If your intent is to find where a new value first appears *regardless of the order*, then I'll change your data slightly:\n\n\n\n```\ndat <- data.frame(Var1 = c(1,2,1,1,2,2,3,3,3,3,3,3,4,4,4,5,5,5,5))\n### ^ different\n\n```\n\nWith this, we'd need to know the first `1` (row 1) but not the next time the value changes to `1` (row 3).\n\n\nFor that we can use just `!duplicated`:\n\n\n\n```\nwhich(!duplicated(dat$Var1))\n# [1] 1 2 7 13 16\n### c(1,2,1,1,2,2,3,3,3,3,3,3,4,4,4,5,5,5,5)\n### ^ ^ ^ ^ ^\n\n```\n\nFor fun, if you wanted to know the *last* occurrence of each number, we can do\n\n\n\n```\nwhich(!duplicated(dat$Var1, fromLast = TRUE))\n# [1] 4 6 12 15 19\n### c(1,2,1,1,2,2,3,3,3,3,3,3,4,4,4,5,5,5,5)\n### ^ ^ ^ ^ ^\n\n```\n\nIf you want the 2nd (or `n`th) for each number, we can switch to using `ave` and `seq_along`:\n\n\n\n```\nwhich(ave(dat$Var1, dat$Var1, FUN = seq_along) == 2L)\n# [1] 3 5 8 14 17\n### c(1,2,1,1,2,2,3,3,3,3,3,3,4,4,4,5,5,5,5)\n### ^ ^ ^ ^ ^\n\n```\n\n(recognizing that this does not return when a number does not have the `n`th occurrence)."}
{"questionId":"7409356a26567ce08f5df393bab325b00058f74bff53c5d2e8fcac1daa3a8af5","question":"Possible bug with PHP PDO and with PostgreSQL\nAt the startup of the docker application (with laravel php), for 1 request, connection to database is fine. After the first request I start to get this error.\n\n\n\n```\nSQLSTATE[08006] [7] could not send SSL negotiation packet: Resource temporarily unavailable (Connection: pgsql, SQL: (select * from ........)\n\n```\n\nUsing:\n\n\n- Laravel v10 and above.\n- PHP 8.3 and above\n- Docker with Ubuntu Latest\n\n\nI tracked down this problem until I found out that PDO is actually not openning a connection to PostgreSQL. I tested it with iptraf and both pg\\_connect and PDO. When we use PDO, we get the error above and but when I try to use pg\\_connect, we can connect and even make a query.\n\n\nSo my findings are, when using iptraf\n\n\n- Cannot open a connection using PDO\n- IPTraf does not show connection openned with PDO\n- I can open a connection using pg\\_connect\n- I can open a connection from a database manager application\n- Happening on both development and production environments\n\n\n[EDIT]\nNew findings:\n\n\n- The whole setup is working on a virtual machine rather then a docker.","questionMetadata":{"type":"version","tag":"php","level":"intermediate"},"answer":"Check the php-swoole package version on your failed deployment.\nIf it is 6.0.0 probably you have here the problem.\n\n\nEdit:\nWe also have this problem, we deployed a container compiled from last week and one with the same code but compiled this week, the difference was that the swoole package had been updated from version 5x to 6.0.0, which is an alpha version. Mysteriously, this version has sneaked into the Ubuntu repository, not being recommended for production and its changelog indicates several changes and incompatibilities with PDO.\n\n\nFrom [php pecl](https:\/\/pecl.php.net\/package\/swoole\/6.0.0)\n\n\n- No longer supports Swoole\\Coroutine\\PostgreSQL coroutine client.\n- Swoole-v6.0.0-alpha is a test version and cannot be used in any production environment; it is for testing purposes only.\n\n\nHOW TO SOLVE IT:\nRemove swoole if you don't need it.\nIf you need it, right now the previous version is not listed on the repo, so you need to get it alternatively."}
{"questionId":"911d92afe03acf95f68bd62bd4b4da1ba71460539e64286dc73ad170939945a2","question":"Is it well defined to cast to an identical layout with const members?\nConsider the following type:\n\n\n\n```\ntemplate<typename T> struct View\n{\n T* data;\n size_t size;\n};\n\n```\n\nIs it valid to cast from: `View<T>&` To `View<const T>&`?\n\n\nDoes it invoke undefined behaviour?\n\n\nI suspect the cast will work as intended on all compilers but will be technically undefined behaviour according to the standard due to the **strict aliasing rule**.","questionMetadata":{"type":"conceptual","tag":"c++","level":"intermediate"},"answer":"View<T>` and `View<const T>` are **unrelated types** and therefore casting a reference from the first to the second violates the strict aliasing rules and causes **UB** (undefined behavior).\n\n\nIt is true that is it valid to cast a `T*` to `const T*` (because these are related types), but this is not relevant when casting a reference to a class containing such members (`View`).\n\n\nIt might work as you expect on your compiler, but it is still undefined behavior by the standard and so I would not advise to rely on it.\n\n\nAs @Fran\u00e7oisAndrieux [commented above](https:\/\/stackoverflow.com\/questions\/78711356\/is-it-well-defined-to-cast-to-an-identical-layout-with-const-members#comment138774990_78711356), you can cope with it by adding a converting constructor or conversion operator to allow converting from `View<T>` to `View<const T>`."}
{"questionId":"51772b145b04e327a5859c90d3c58b61903646eee30444f5b6bf7cfc4d1ba4ac","question":"Translate Pandas groupby plus resample to Polars in Python\nI have this code that generates a toy DataFrame (production df is much complex):\n\n\n\n```\nimport polars as pl\nimport numpy as np\nimport pandas as pd\n\ndef create_timeseries_df(num_rows):\n date_rng = pd.date_range(start='1\/1\/2020', end='1\/01\/2021', freq='T')\n data = {\n 'date': np.random.choice(date_rng, num_rows),\n 'category': np.random.choice(['A', 'B', 'C', 'D'], num_rows),\n 'subcategory': np.random.choice(['X', 'Y', 'Z'], num_rows),\n 'value': np.random.rand(num_rows) * 100\n }\n df = pd.DataFrame(data)\n df = df.sort_values('date')\n df.set_index('date', inplace=True, drop=False)\n df.index = pd.to_datetime(df.index)\n\n return df\n\nnum_rows = 1000000 # for example\ndf = create_timeseries_df(num_rows)\n\n```\n\nThen perform this transformations with Pandas.\n\n\n\n```\ndf_pd = df.copy()\ndf_pd = df_pd.groupby(['category', 'subcategory'])\ndf_pd = df_pd.resample('W-MON')\ndf_pd.agg({\n 'value': ['sum', 'mean', 'max', 'min']\n}).reset_index()\n\n```\n\nBut, obviously it is quite slow with Pandas (at least in production). Thus, I'd like to use Polars to speed up time. This is what I have so far:\n\n\n\n```\n#Convert to Polars DataFrame\ndf_pl = pl.from_pandas(df)\n\n#Groupby, resample and aggregate\ndf_pl = df_pl.group_by(['category', 'subcategory'])\ndf_pl = df_pl.group_by_dynamic('date', every='1w', closed='right')\ndf_pl.agg([\n pl.col('value').sum().alias('value_sum'),\n pl.col('value').mean().alias('value_mean'),\n pl.col('value').max().alias('value_max'),\n pl.col('value').min().alias('value_min')\n])\n\n```\n\nBut I get `AttributeError: 'GroupBy' object has no attribute 'group_by_dynamic'`. 
Any ideas on how to use `groupby` followed by `resample` in Polars?","questionMetadata":{"type":"debugging","tag":"python","level":"intermediate"},"answer":"You can pass additional columns to group by in a call to `group_by_dynamic` by passing a list with the named argument `group_by=`:\n\n\n\n```\ndf_pl = df_pl.group_by_dynamic(\n \"date\", every=\"1w\", closed=\"right\", group_by=[\"category\", \"subcategory\"]\n)\n\n```\n\nWith this, I get a dataframe that looks similar to the one your pandas code produces:\n\n\n\n```\nshape: (636, 7)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 category \u2506 subcategory \u2506 date \u2506 sum \u2506 mean \u2506 max \u2506 min \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2506 --- \u2502\n\u2502 str \u2506 str \u2506 datetime[ns] \u2506 f64 \u2506 f64 \u2506 f64 \u2506 f64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 D \u2506 Z \u2506 2019-12-30 00:00:00 \u2506 55741.652346 \u2506 50.399324 \u2506 99.946595 \u2506 0.008139 \u2502\n\u2502 D \u2506 Z \u2506 2020-01-06 00:00:00 \u2506 76161.42206 \u2506 50.139185 \u2506 99.96917 \u2506 0.138366 \u2502\n\u2502 D \u2506 Z \u2506 2020-01-13 00:00:00 \u2506 80222.894298 \u2506 49.581517 \u2506 99.937069 \u2506 0.117216 \u2502\n\u2502 D \u2506 Z \u2506 2020-01-20 00:00:00 \u2506 82042.968995 \u2506 50.456931 \u2506 99.981101 \u2506 0.009077 \u2502\n\u2502 D \u2506 Z \u2506 2020-01-27 00:00:00 \u2506 82408.144078 \u2506 49.494381 \u2506 99.954734 \u2506 0.023769 \u2502\n\u2502 \u2026 \u2506 \u2026 \u2506 \u2026 \u2506 \u2026 \u2506 \u2026 \u2506 \u2026 \u2506 \u2026 \u2502\n\u2502 B \u2506 Z \u2506 2020-11-30 00:00:00 \u2506 79530.963748 \u2506 49.737939 \u2506 99.973554 \u2506 0.007446 \u2502\n\u2502 B \u2506 Z \u2506 2020-12-07 00:00:00 \u2506 80050.524653 \u2506 49.566888 \u2506 99.975546 \u2506 0.003066 \u2502\n\u2502 B \u2506 Z \u2506 2020-12-14 00:00:00 \u2506 77896.578291 \u2506 50.029915 \u2506 99.969098 \u2506 0.033222 \u2502\n\u2502 B \u2506 Z \u2506 2020-12-21 00:00:00 \u2506 76490.507942 \u2506 49.636929 \u2506 99.953563 \u2506 0.021683 \u2502\n\u2502 B \u2506 Z \u2506 2020-12-28 00:00:00 \u2506 46964.533378 \u2506 50.553857 \u2506 99.653981 \u2506 0.042546 
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518"}
{"questionId":"0641855784ebc78fff3d1fea23959113ff6f09fc771b2c31451dbbfc5a6cf17d","question":"DVTPlugInQuery: Requested but did not find extension point with identifier 'Xcode.InterfaceBuilderBuildSupport.PlatformDefinition'\n> \n> DVTPlugInQuery: Requested but did not find extension point with identifier 'Xcode.InterfaceBuilderBuildSupport.PlatformDefinition'. This is programmer error; code should only request extension points that are defined by itself or its dependencies.\n> \n> \n> \n\n\nI am getting this warning while building a react native application. How do I resolve this?","questionMetadata":{"type":"debugging","tag":"other","level":"intermediate"},"answer":"It seems like if I don't touch an RN project for a couple of weeks, I get an error like this. I had this exact same issue. I tried a couple of related question\/answers with no luck ([Can't build react-native app due to errant recursive path issue](https:\/\/stackoverflow.com\/questions\/77914290\/cant-build-react-native-app-due-to-errant-recursive-path-issue)) and ([PBXCp error ..... is longer than filepath buffer size (1025)](https:\/\/stackoverflow.com\/questions\/35619276\/pbxcp-error-is-longer-than-filepath-buffer-size-1025\/44255203#44255203)), but I don't believe these are related to our problem. I fixed this particular issue the same way I've been fixing a lot of XCode issues:\n\n\nAssuming you are on Mac OS X:\n\n\n1. Close XCode and the Simulator.\n2. Go to Activity Monitor and force quit any \"simulator\" tasks. I had two different ones running.\n3. Open Settings->Storage Settings, then select \"Developer\", then delete \"Xcode Caches\"\n\n\nAfter this, try again -- the application should build and the simulator should open successfully."}
{"questionId":"cb19c4225c6a101d1409a4b4deb754518c57865d5910a3f1484fa84959bb11c9","question":"Class template static field initialization different results on GCC and Clang\nConsider the following C++ code example:\n\n\n\n```\n#include<iostream>\n\nclass B {\npublic:\n int val;\n\n B(int v):val(v){\n std::cout<< val << \"\\n\";\n }\n};\n\ntemplate<typename T>\nclass A {\npublic:\n static B b;\n};\n\ntemplate<typename T>\nB A<T>::b = B(1);\n\nint main() {\n A<int>::b.val;\n return 0;\n}\n\n\n```\n\nOn GCC 11.4.0, build with `g++ main.cc -g` got output `1`.\n\n\nOn Clang 14.0.0, build with `clang++ main.cc -g` got `segfault`.\n\n\nUbuntu 22.04.4 LTS\n\n\nCannot understand the reason of such behavior, would be very grateful for any help.","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"The program has undefined behavior because you use `std::cout` without having any guarantee that it is initialized.\n\n\nThe standard streams like `std::cout` are not automatically initialized and usable. Instead the header `<iostream>` behaves as if it declared a global static storage duration variable of type `std::ios_base::Init`. When a variable of this type is initialized, it will initialize the standard streams and make them usable.\n\n\nThe initialization of the static storage duration variable `A<int>::b` is *dynamic initialization*, because the constructor of `B` is not `constexpr` and even if it was `constexpr` still, because it calls a non-`constexpr` `operator<<`. And because it is a non-local variable instantiated from a template, it has *unordered dynamic initialization*, meaning that its initialization is unordered with any other dynamic initialization of non-local variables.\n\n\nBecause the initialization of `A<int>::b` is unordered with initialization of `<iostream>`'s `std::ios_base::Init` instance, it may happen prior to the initialization of `std::cout`.\n\n\nTo assure that the streams are initialized when you need to use them before `main` is entered, you need to initialize a `std::ios_base::Init` instance yourself:\n\n\n\n```\nB(int v):val(v){\n std::ios_base::Init _;\n std::cout<< val << \"\\n\";\n}\n\n```\n\nOr better, avoid global variables with dynamic initialization before `main` and instead use local static storage duration variables which are initialized at their first point of use from `main`. See e.g. the Meyers' singleton pattern."}
{"questionId":"5ff2af5417fe3599a4981ff4e942279263e3425669d341189513622f26152641","question":"New column, sampled from list, based on column value\nvalues = [1,2,3,2,3,1]\ncolors = ['r','g','b']\nexpected_output = ['r', 'g', 'b', 'g', 'b', 'r'] # how to create this in pandas?\n\ndf = pd.DataFrame({'values': values})\ndf['colors'] = expected_output\n\n```\n\nI want to make a new column in my dataframe where the colors are selected based on values in an existing column. I remember doing this in xarray with a vectorised indexing trick, but I can't remember if the same thing is possible in pandas. It feels like it should be a basic indexing task.\n\n\nThe current answers are a nice start, thanks! They take a bit too much advantage of the numerical nature of \"values\" though. I'd rather something generic that would also work if say\n\n\n\n```\nvalues = ['a', 'b', 'c', 'b', 'c', 'a']\n\n```\n\nI guess the \"map\" method probably still works.","questionMetadata":{"type":"implementation","tag":"python","level":"beginner"},"answer":"**Code**\n\n\nuse numpy indexing\n\n\n\n```\nimport numpy as np\ndf['colors'] = np.array(colors)[df['values'] - 1]\n\n```\n\ndf\n\n\n\n```\n values color\n0 1 r\n1 2 g\n2 3 b\n3 2 g\n4 3 b\n5 1 r\n\n```\n\nIf you want to solve this problem using only Pandas, use `map` function. (with @Onyambu comment)\n\n\n\n```\nm = dict(enumerate(colors, 1))\ndf['colors'] = df['values'].map(m)"}
{"questionId":"cf430ffc91f22376a0f3d861ceff199962d74c74ca61a9ebd9911c3d2deb69bf","question":"AWK equivalent to `read -r \\_ \\_ remainder\nLet's say that you have a file which contains N whitespace-delimited columns and an additional column which has spaces in it that you want to keep.\n\n\nExample with N = 2:\n\n\n\n```\n1.1 1.2 data for row1\n 2.1 2.2 data for row2\n? ? data for row3\n \\ * data for row4\n\n```\n\nI would like to output:\n\n\n\n```\ndata for row1\ndata for row2\ndata for row3\ndata for row4\n\n```\n\nIn the shell you can do it easily with:\n\n\n\n```\nwhile read -r _ _ data\ndo\n printf \"%s\\n\" \"$data\"\ndone < data.txt\n\n```\n\nBut with `awk` it's kind of difficult. Is there a method in `awk` for splitting only the first N columns?","questionMetadata":{"type":"implementation","tag":"awk","level":"intermediate"},"answer":"The premise of the awk language is that there should only be constructs to do things that aren't easy to do with other constructs to keep the language concise and so avoid the language bloat that some other tools\/languages suffer from. e.g. some people like that perl has many unique language constructs to do anything you could possible want to do while others express their opposing view of the language in cartoons like <https:\/\/www.zoitz.com\/comics\/perl_small.png>.\n\n\nThis is just one of the many things that it'd be nice to have a function to do, but it's so easy to code whatever you actually need to do to skip a couple of fields for any specific input it'd just be cluttering up the language if a function existed to do it and if we had a function for THIS there are 100s of other functions that should also be created to do all of the other things it'd just be nice to have a function to do.\n\n\nUsing GNU awk for `\\s\/\\S` shorthand\n\n\n\n```\n$ awk 'sub(\/^\\s*(\\S+\\s+){2}\/,\"\")' file\ndata for row1\ndata for row2\ndata for row3\ndata for row4\n\n```\n\nand the same with any POSIX awk:\n\n\n\n```\n$ awk 'sub(\/^[[:space:]]*([^[:space:]]+[[:space:]]+){2}\/,\"\")' file\ndata for row1\ndata for row2\ndata for row3\ndata for row4\n\n```\n\nNote that the awk output from above would retain any trailing white space, unlike a shell read loop.\n\n\nBoth of those rely on the `FS` being the default blank character but are easily modified for any other `FS` that can be negated in a bracket expression (or opposite character class).\n\n\nNote that the entire approach relies on being able to negate the `FS` in a bracket expression so it wouldn't work if the `FS` was some arbitrary regexp or even a multi-char string but then neither would the shell read loop you're asking to duplicate the function of.\n\n\nIf you do happen to have a `FS` you can't just negate in a bracket expression, e.g. if your fields are separated by 3 digits or 2 punctuation characters so you have something like:\n\n\n\n```\n$ echo 'abc345def;%ghi+klm;%nop345qrs' |\n awk -v FS='[[:digit:]]{3}|[[:punct:]]{2}' '{for (i=1; i<=NF; i++) print i, $i}'\n1 abc\n2 def\n3 ghi+klm\n4 nop\n5 qrs\n\n```\n\nthen here's a more general approach using GNU awk for the 4th arg to `split()`:\n\n\n\n```\n$ echo 'abc345def;%ghi+klm;%nop345qrs' |\n awk -v FS='[[:digit:]]{3}|[[:punct:]]{2}' '{\n split($0,f,FS,s)\n print substr( $0, length(s[0] f[1] s[1] f[2] s[2]) + 1 )\n }'\nghi+klm;%nop345qrs"}
{"questionId":"b1df3bd7d72641261e66f9be4f207ea3145d9e4e5323d48789f7c5c96d629c3c","question":"Problem getting Mojolicious routes to work\nThe following route definition works nicely\n\n\n\n```\nsub load_routes {\n my($self) = @_;\n\n my $root = $self->routes;\n\n $root->get('\/')->to( controller=>'Snipgen', action=>'indexPage');\n $root->any('\/Snipgen') ->to(controller=>'Snipgen', action=>'SnipgenPage1');\n $root->any('\/Snipgen\/show') ->to(controller=>'Snipgen', action=>'SnipgenPage2');\n }\n\n```\n\nand \".\/script\/snipgen.pl routes -v\" gives\n\n\n\n```\n\/ .... GET ^\n\/Snipgen .... * Snipgen ^\\\/Snipgen\n\/Snipgen\/show .... * Snipgenshow ^\\\/Snipgen\\\/show\n\n\n```\n\nbut this fails for 'http:\/\/127.0.0.1:3000\/Snipgen\/' giving page not found\n\n\n\n```\nsub load_routes {\n my($self) = @_;\n\n my $root = $self->routes;\n\n $root->get('\/')->to(controller=>'Snipgen', action=>'indexPage');\n my $myaction = $root->any('\/Snipgen')->to(controller=>'Snipgen', action=>'SnipgenPage1');\n $myaction->any('\/show') ->to(controller=>'Snipgen', action=>'SnipgenPage2');\n }\n\n```\n\nand the corresponding \".\/script\/snipgen.pl routes -v\" gives\n\n\n\n```\n\/ .... GET ^\n\/Snipgen .... * Snipgen ^\\\/Snipgen\n +\/show .... * show ^\\\/show\n\n\n```\n\nThe SnipgenPageXX subs all have 'return;' as their last line. Any idea what is going wrong?","questionMetadata":{"type":"debugging","tag":"perl","level":"intermediate"},"answer":"Mojolicious [documentation](https:\/\/docs.mojolicious.org\/Mojolicious\/Guides\/Routing#Nested-routes) states, that *A route with children can't match on its own* . You can work around this by adding a child route with `'\/'`. Another option is to use [under](https:\/\/docs.mojolicious.org\/Mojolicious\/Guides\/Routing#Under)() which offers more control by adding a dispatch target for the partial route.\nAs suggested in the comments by brian d foy, you can display the routes of your app by passing a `routes` option to your application script like so:\n`perl -Ilib script\/my_app routes -v`.\nSee the documentation [here](https:\/\/docs.mojolicious.org\/Mojolicious\/Guides\/Routing#Introspection) and [here](https:\/\/docs.mojolicious.org\/Mojolicious\/Command\/routes).\nThe Mojolicious *page not found* template also displays your routes at the top of the page.\n\n\nYou want the routes to look like this:\n\n\n\n```\n\/ .... * ^\n\/Snipgen .... * Snipgen ^\\\/Snipgen\n +\/ .... * ^\n +\/show .... * show ^\\\/show\n\n```\n\nInstead of:\n\n\n\n```\n\/ .... * ^\n\/Snipgen .... * Snipgen ^\\\/Snipgen\n +\/show .... 
* show ^\\\/show\n\n```\n\nAdding a child with `'\/'`:\n\n\n\n```\nsub load_routes {\n my($self) = @_;\n\n my $root = $self->routes;\n\n $root->any('\/')->to(controller=>'Snipgen', action=>'indexPage');\n my $myaction = $root->any('\/Snipgen')->to(controller => 'Snipgen');;\n $myaction->any('\/')->to( action=>'SnipgenPage1');\n $myaction->any('\/show')->to( action=>'SnipgenPage2');\n}\n\n```\n\nUsing `under()`\n\n\n\n```\nsub load_routes {\n my($self) = @_;\n\n my $root = $self->routes;\n\n $root->get('\/')->to(controller=>'Snipgen', action=>'indexPage');\n my $myaction = $root->under('\/Snipgen');#->to('Auth#check')\n $myaction->any('\/')->to(controller=>'Snipgen', action=>'SnipgenPage1');\n $myaction->any('\/show')->to(controller=>'Snipgen', action=>'SnipgenPage2');\n}\n\n```\n\nOr add a child route like so:\n\n\n\n```\nsub load_routes {\n my($self) = @_;\n my $root = $self->routes;\n \n $root->get('\/')->to(controller=>'Snipgen', action=>'indexPage');\n my $myaction = $root->get('Snipgen')\n ->to(controller=>'Snipgen', action=>'SnipgenPage1')\n ->add_child(Mojolicious::Routes::Route->new);\n $myaction->get('show')->to(controller=>'Snipgen', action=>'SnipgenPage2');\n}"}
{"questionId":"b22c523ab9c29275b41697af2e70255c18bfee229553103a514b005823a9e169","question":"Hooking syscall by modifying sys\\_call\\_table does not work\nI'm trying to do basic hooking by locating `sys_call_table` and modify an entry for `sys_read` syscall to a function in my own kernel module. I have tried kprobes I'm just interested to do it with `sys_call_table`.\n\n\nBelow is my code:\n\n\n\n```\n#include <linux\/kernel.h>\n#include <linux\/module.h>\n#include <linux\/kprobes.h>\n#include <linux\/syscalls.h>\n#include <linux\/version.h>\n\n\n\ntypedef asmlinkage long (*t_syscall)(const struct pt_regs *);\nunsigned long cr0;\nunsigned long **__sys_call_table;\ntypedef unsigned long (*kallsyms_lookup_name_t)(const char *name);\ntypedef asmlinkage int (*orig_getdents64_t)(unsigned int,\n struct linux_dirent64 *, unsigned int); \nasmlinkage long (*original_syscall)(const struct pt_regs *);\nstatic struct kprobe kp = {\n .symbol_name = \"kallsyms_lookup_name\"\n};\nstatic kallsyms_lookup_name_t kallsyms_lookup_name_ptr;\n\nstatic struct kprobe kp2 = {\n .symbol_name = \"__x64_sys_read\"\n};\n\nunsigned long *get_syscall_address(unsigned long *sys_call_table, int syscall_number);\nasmlinkage long hooked_syscall(const struct pt_regs *regs);\n\n\n#if LINUX_VERSION_CODE > KERNEL_VERSION(4, 16, 0)\nstatic inline void\nwrite_cr0_forced(unsigned long val)\n{\n unsigned long __force_order;\n\n asm volatile(\n \"mov %0, %%cr0\"\n : \"+r\"(val), \"+m\"(__force_order));\n}\n#endif\n\nstatic inline void\nunprotect_memory(void)\n{\n#if IS_ENABLED(CONFIG_X86) || IS_ENABLED(CONFIG_X86_64)\n#if LINUX_VERSION_CODE > KERNEL_VERSION(4, 16, 0)\n write_cr0_forced(cr0 & ~0x00010000);\n#else\n write_cr0(cr0 & ~0x00010000);\n#endif\n#elif IS_ENABLED(CONFIG_ARM64)\n update_mapping_prot(__pa_symbol(start_rodata), (unsigned long)start_rodata,\n section_size, PAGE_KERNEL);\n#endif\n}\n\nstatic inline void\nprotect_memory(void)\n{\n#if IS_ENABLED(CONFIG_X86) || IS_ENABLED(CONFIG_X86_64)\n#if LINUX_VERSION_CODE > KERNEL_VERSION(4, 16, 0)\n write_cr0_forced(cr0);\n#else\n write_cr0(cr0);\n#endif\n#elif IS_ENABLED(CONFIG_ARM64)\n update_mapping_prot(__pa_symbol(start_rodata), (unsigned long)start_rodata,\n section_size, PAGE_KERNEL_RO);\n\n#endif\n}\n\nasmlinkage long hooked_syscall(const struct pt_regs *regs) {\n printk(KERN_INFO \"Syscall hooked!\\n\");\n return original_syscall(regs);\n}\n\nstatic unsigned long **find_sys_call_table(void) {\n unsigned long **sct;\n sct = (unsigned long **)kallsyms_lookup_name_ptr(\"sys_call_table\");\n return sct;\n}\n\n\nstatic int __init kprobe_init(void)\n{\n int ret;\n cr0 = read_cr0();\n ret = register_kprobe(&kp);\n if (ret < 0)\n return ret;\n\n kallsyms_lookup_name_ptr = (kallsyms_lookup_name_t)kp.addr;\n\n __sys_call_table = find_sys_call_table();\n\n if (!__sys_call_table) {\n printk(KERN_ERR \"Couldn't find sys_call_table.\\n\");\n return -1;\n }\n\n printk(\"__sys_call_table address : %px\\n\", __sys_call_table);\n\n unprotect_memory();\n original_syscall = (void *)__sys_call_table[__NR_read];\n printk(\"__NR_READ : %px\\n\", original_syscall);\n printk(\"HOOKED FUNCTION : %px\\n\", (unsigned long *)hooked_syscall);\n __sys_call_table[__NR_read] = (unsigned long *)hooked_syscall;\n \n \/\/\/ Double check\n original_syscall = (void *)__sys_call_table[__NR_read];\n printk(\"__NR_READ : %px\\n\", original_syscall);\n\n protect_memory();\n\n \/\/ Extra check\n int ret2 = register_kprobe(&kp2);\n if (ret2 < 0)\n return ret2;\n\n printk(\"%px\\n\", kp2.addr);\n\n 
unregister_kprobe(&kp);\n unregister_kprobe(&kp2);\n\n return 0;\n}\n\nstatic void __exit kprobe_exit(void)\n{\n}\n\nmodule_init(kprobe_init)\nmodule_exit(kprobe_exit)\nMODULE_LICENSE(\"GPL\");\n\n```\n\nand the Makefile,\n\n\n\n```\n# Name of the kernel module\nobj-m += sct.o\n\n# List of source files for the module\nhello_world-objs := sct.c\n\n# Path to the kernel source tree\nKDIR := \/lib\/modules\/$(shell uname -r)\/build\n\nall:\n make -C $(KDIR) M=$(PWD) modules\n\nclean:\n make -C $(KDIR) M=$(PWD) clean\n\n```\n\nI get the address to `kallsyms_lookup_name()` by installing a kprobe and after registering it, get the `.addr` field. Once I got the address to `sys_call_table` I can read the the\naddress of `sys_read` syscall. I checked the read address by grepping `\/proc\/kallsyms` and it seems I got the right address.\nThen I change the `__NR_read` entry to a function in my lkm. I have some debug prints afterward and I can confirm that the `sys_call_table` entry has changed.\n\n\n\n```\n printk(\"__sys_call_table address : %px\\n\", __sys_call_table);\n\n unprotect_memory();\n original_syscall = (void *)__sys_call_table[__NR_read];\n printk(\"__NR_READ : %px\\n\", original_syscall);\n printk(\"HOOKED FUNCTION : %px\\n\", (unsigned long *)hooked_syscall);\n __sys_call_table[__NR_read] = (unsigned long *)hooked_syscall;\n \n \/\/\/ Double check\n original_syscall = (void *)__sys_call_table[__NR_read];\n printk(\"__NR_READ : %px\\n\", original_syscall);\n\n```\n\nUnfortunately, after modifying `sys_call_table` entry I don't get any printk showing in *dmesg*, or any crash or anything!\n\n\nTo do extra checking, I installed a kprobe on `sys_read` and got the `addr` but even after modifying `sys_call_table` the kprobe still shows the original address of `sys_read`.\n\n\nI'm on Ubuntu 24.04, `6.8.0-35-generic`. I also tried Ubuntu 22.04 but I got the same result! Both with stock kernel with default configuration. Tried in VMware VM as well as physical hardware.\n\n\nI searched a bit to see if any security mechanism might cause issues with this, but couldn't find anything :(\n\n\nIt's pretty confusing for me why my modification to `sys_call_table` doesn't seem to take effect.\n\n\nCan you please tell me what do I miss here? Is hooking `sys_call_table` a thing yet?\nI'm new and learning different Linux kernel features, I need to know if `sys_call_table` modification for hooking a syscall is still a thing or not?\n\n\nI tried to include enough information to help reproducing the same result.","questionMetadata":{"type":"version","tag":"c","level":"advanced"},"answer":"Surprise, surprise! You cannot do this anymore since Linux v6.9. Commit [1e3ad78334a69b36e107232e337f9d693dcc9df2](https:\/\/git.kernel.org\/pub\/scm\/linux\/kernel\/git\/torvalds\/linux.git\/commit\/?id=1e3ad78334a69b36e107232e337f9d693dcc9df2) introduced a security mitigation against speculative execution on x86 that completely removed the use of syscall tables, which has been backported to v6.8.5+, v6.6.26+, v6.1.85+, v5.15.154+.\n\n\nUbuntu 24.04 uses the v6.8 stable branch, and Ubuntu 22.04 uses the v6.1 stable branch, so the patch is present there too. The same goes for Debian and Debian-based distros like Kali. Most major Linux distributions also incorporated this change as they simply follow the stable kernel branch.\n\n\nThe `sys_call_table` symbol still exists and still contains valid function pointers, but it is only used for tracing purposes (`CONFIG_FTRACE_SYSCALLS=y`). 
The actual syscall dispatch code is now implemented as a huge inlined `switch` case ([source](https:\/\/git.kernel.org\/pub\/scm\/linux\/kernel\/git\/torvalds\/linux.git\/tree\/arch\/x86\/entry\/syscall_64.c?id=1e3ad78334a69b36e107232e337f9d693dcc9df2#n27)):\n\n\n\n```\n#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);\n\nlong x64_sys_call(const struct pt_regs *regs, unsigned int nr)\n{\n switch (nr) {\n #include <asm\/syscalls_64.h>\n default: return __x64_sys_ni_syscall(regs);\n }\n};\n\n```\n\n\n\n---\n\n\nI see you already mention you tried [kprobes](https:\/\/docs.kernel.org\/trace\/kprobes.html) (the real solution) so I assume you know how to use those. I'm just going to leave this here for whoever comes across this post and might find it useful. Using kprobes is significantly easier than doing things manually and in a \"dirty\" way by editing `sys_call_table`.\n\n\nIn order to find the appropriate symbol to hook you can take a look at the kernel symbols directly with `readelf -s` and grep for the syscall name you are interested in. Usually, they are prefixed with an arch-specific prefix. In case of x86 it's `__x64_sys_` for 64-bit syscalls.\n\n\nI also maintain [syscalls.mebeim.net](https:\/\/syscalls.mebeim.net) where you can find a list of syscall symbol names for various architectures and kernel versions, which you may find useful.\n\n\nHere's an example of how this could be done:\n\n\n\n```\n#include <linux\/kprobes.h>\n#include <linux\/ptrace.h>\n\/\/ ...\n\nstatic int sys_read_kprobe_pre_handler(struct kprobe *p, struct pt_regs *regs)\n{\n \/\/ Do something here...\n return 0;\n}\n\nstruct kprobe syscall_kprobe = {\n .symbol_name = \"__x64_sys_read\",\n .pre_handler = sys_read_kprobe_pre_handler,\n};\n\nstatic int __init my_module_init(void)\n{\n int err;\n\n err = register_kprobe(&syscall_kprobe);\n if (err) {\n pr_err(\"register_kprobe() failed: %d\\n\", err);\n return err;\n }\n\n return 0;\n}\n\nstatic void __exit my_module_exit(void)\n{\n unregister_kprobe(&syscall_kprobe);\n}\n\n```\n\nNote: I did not test the above code so don't expect it to run perfectly as is, but you can use it as a starting point.\n\n\n**Important:** remember that the `.pre_handler` kprobe will get the *kernel registers* in the `struct pt_regs` that is passed as second argument, *not the userspace registers*. You will have to get the `struct pt_regs` holding userspace registers from the register holding the first function argument (this will be different depending on architecture, on x86 it's `regs->di` for RDI). There are also special cases where the syscall is not defined with a `SYSCALL_DEFINEn` macro and the userspace arguments will not be passed as a `struct pt_regs`. You should check kernel sources of whichever syscall you are trying to hook. FWIF, I host a syscall table reference [here](https:\/\/syscalls.mebeim.net) that can help."}
{"questionId":"c0e5ea35d07028dbacf6fe27f63a31526a5fbe090c9a515d6baaffd5d98907be","question":"Generating a pseudorandom binary sequence where the same number does not occur more than twice in a row\nI want to be able to generate a (pseudo)random binary sequence (for example, a sequence of Rs and Ls like RLLRRLR) which is counterbalanced such that the same item does not occur more than twice in a row in the sequence and if I, for example, have a sequence of 20, I get 10 of each. Is there a function in R that can do something like this?\n\n\nI tried to write a function like this myself. Here is the attempt:\n\n\n\n```\nRL_seq <- function(n_L = 10, n_R = 10, max_consec = 2, initial_seq = NULL) {\n while(n_L > 0 | n_R > 0){\n side <- sample(c(\"R\", \"L\"), 1)\n \n if (side == \"R\" & n_R > 0 & length(grep(\"R\", tail(initial_seq, max_consec))) != max_consec) {\n initial_seq <- append(initial_seq, side)\n n_R <- n_R - 1\n } else if (side == \"L\" & n_L > 0 & length(grep(\"L\", tail(initial_seq, max_consec))) != max_consec) {\n initial_seq <- append(initial_seq, side)\n n_L <- n_L - 1\n }\n }\n print(initial_seq)\n}\n\n# The function does not stop with the following seed\nset.seed(1)\nRL_seq()\n\n```\n\nHowever, it's up to chance whether the code gets stuck or not. I was also hoping that I could change the rules for the sequence (for example, allowing for 3 consecutive Rs), but the code tends to breaks if I touch the arguments. At this point I would be happy if I could run it with the default arguments and not have it get stuck.\n\n\nI have searched around but I cannot find an answer.","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"Here, I first determine the mix of elements, then repeatedly shuffle until there's a draw without too many consecutive picks. I'm sure there's a more computationally efficient approach (e.g. where we don't overwrite `seq` every iteration), but this might suffice. For example, it takes ~0.002 sec for `max_consec = 2`, or a few seconds to find a draw with no repeats (`max_consec = 1`), even if that takes 100k+ draws to find.\n\n\nAny approach (like the two suggested so far) which relies on random draws will perform poorly for larger sequences, since it will become vanishingly unlikely to happen upon a sequence with few enough repeated strings by chance.\n\n\n\n```\nRL_seq <- function(n_L = 10, n_R = 10, max_consec = 2) {\n # make a sequence that is all the Ls then all the Rs\n seq = c(rep(\"L\", n_L), rep(\"R\", n_R))\n consec = max_consec + 1 # we know the first attempt won't work\n while(consec > max_consec) {\n # overwrite the sequence with a shuffle of it\n seq = sample(seq, length(seq), replace = FALSE)\n # what is the maximum \"run length encoding\" (rle) length of the sequence?\n consec = max(rle(seq)[1]$lengths)\n }\n seq\n}\n\n\nRL_seq()"}
{"questionId":"aeafd4b3a79533b1a470a86ab83e777f66b7d80be143ca89e082b9496d01d4f8","question":"Deadlock on static-initialized jthread calling std::stacktrace\\_entry::description\nThe below code results in a deadlock upon exiting `main()`\n\n\n\n```\n#include <stacktrace>\n#include <iostream>\n#include <thread>\n#include <semaphore>\n#include <chrono>\n\nusing namespace std::chrono_literals;\n\nstruct Singleton\n{\n Singleton()\n {\n worker = std::jthread{ [this] {\n sema.acquire();\n for (auto& e : trace) {\n std::this_thread::sleep_for(50ms);\n std::cout << e.description() << std::endl;\n }\n } };\n }\n std::binary_semaphore sema{ 0 };\n std::stacktrace trace;\n std::jthread worker;\n};\n\nstd::stacktrace g()\n{\n return std::stacktrace::current();\n}\n\nstd::stacktrace f()\n{\n return g();\n}\n\nSingleton& get()\n{\n static Singleton sing;\n return sing;\n}\n\nint main(int argc, char** argv) {\n get().trace = f();\n get().sema.release();\n std::this_thread::sleep_for(350ms);\n return 0;\n}\n\n```\n\nSpecifically, calling `description()` seems to cause a deadlock in some CRT code trying to acquire a critical section.\n\n\nI hypothesize that `description()` calls into some CRT code which depends on a global object in the CRT managed by a critical section. Either that object is destroyed upon exiting main before the `jthread` destructor is called, or upon exiting main the same critical section is being entered.\n\n\nIf this code is somehow undefined behavior, I would be grateful for someone to point out exactly what aspect of this usage is UB.\n\n\nFor context, this is a minimal reproduction of a problem existing in a much larger codebase, where a logging channel object containing a worker thread and lock-free queue is being managed as a singleton.\n\n\nEdit: note the `sleep_for` calls are purely for illustrative purposes, they are not essential nor are they an attempt to fix a race condition. The code exhibits the same deadlocking behavior if they are removed.","questionMetadata":{"type":"debugging","tag":"c++","level":"advanced"},"answer":"I think this bug is due to that `atexit` functions, including local static object destructors are called under the same lock, that these functions are enumerated. This would end up in deadlock.\n\n\nI don't think there are some words about this situation in the standard. It just doesn't mention any such limitations. I would have expected them mentioned around [`[support.start.term]\/6`](https:\/\/eel.is\/c++draft\/support.start.term#6) or [`[basic.start.term]\/5`](https:\/\/eel.is\/c++draft\/basic.exec#basic.start.term-5). So I assume it is a bug in CRT.\n\n\nA permanent clean fix in CRT would be to avoid calling user code from within a lock (both `stacktrace` machinery code and your singleton destructor code is user code in this regard). Either by unlocking before calling the user's code or using a lock-free list. But unfortunately, this would have severe performance impact, at least with a naive implementation (could like grab all registered functions at once under the lock, call them, repeat to see if there are more, this would still minimize the lock usage). Maybe that's why this wasn't fixed yet.\n\n\nSuggest searching\/reporting to <https:\/\/developercommunity.visualstudio.com\/> to see if it can be fixed.\n\n\n\n\n---\n\n\nAs a workaround, I propose to use `stacktrace` machinery once before your singleton is constructed. It will then be initialized earlier, so will destroy later. 
Note that `current` is lightweight, and doesn't need global objects, you'll have to use some of string conversion functions.\n\n\n\n```\nstruct Singleton\n{\n Singleton()\n {\n \/\/ Instantiate stacktrace machinery to prevent deadlock\n (void) std::to_string(std::stacktrace::current()); \n\n worker = std::jthread{ [this] {\n sema.acquire();\n for (auto& e : trace) {\n std::this_thread::sleep_for(50ms);\n std::cout << e.description() << std::endl;\n }\n } };\n }\n std::binary_semaphore sema{ 0 };\n std::stacktrace trace;\n std::jthread worker;\n};"}
{"questionId":"13d66d8fe53f07d36406a1f0d9e1083bbf3b3c77cfc9b631be2a0645677b8d95","question":"What causes this bug in mvtnorm package in R?\nI am conducting some nonlinear optimization using multivariate probabilities as my objective function. I've spent hours thinking I had issue with the optimization algorithm, but actually I've tracked the bug down to the use of the `mvtnorm` package.\n\n\n## Code\n\n\n\n```\nlibrary(mvtnorm)\npmvnorm(\n lower = c(-1.281552 , 7.089083, 0.5193308),\n upper = c(-1.200552, Inf, Inf),\n corr = diag(1, nrow =3)\n) \n\n```\n\nIf you excecute this code, every 10-20 times it will return `NaN` as opposed to `3.047822e-15`.\n\n\nWhy is this the case? Can anyone enlighten me? Also, is there an alternative to `mvtnorm::pmvnorm()` that will prevent this type of instability?\n\n\nEdit:\n\n\n- according to @LMc in comments you can reproduce the error using seed = 10L as an example\n- @PBull gives a great answer below in the comments\n- @Gregor Thomas in comments describes that using Miwa() fixes it (though Miwa I believe approximates -Inf\/+Inf to -1e4\/1e4\n\n\n## Session info\n\n\n\n> \n> R version 4.3.1 (2023-06-16)\n> Platform: x86\\_64-pc-linux-gnu (64-bit)\n> Running under: Red Hat Enterprise Linux\n> \n> \n> Matrix products: default\n> BLAS: \/usr\/lib64\/libblas.so.3.4.2\n> LAPACK: \/usr\/lib64\/liblapack.so.3.4.2\n> \n> \n> locale:\n> [1] LC\\_CTYPE=en\\_US.UTF-8 LC\\_NUMERIC=C LC\\_TIME=en\\_US.UTF-8 LC\\_COLLATE=en\\_US.UTF-8 \n> \n> [5] LC\\_MONETARY=en\\_US.UTF-8 LC\\_MESSAGES=en\\_US.UTF-8 LC\\_PAPER=en\\_US.UTF-8 LC\\_NAME=C \n> \n> [9] LC\\_ADDRESS=C LC\\_TELEPHONE=C LC\\_MEASUREMENT=en\\_US.UTF-8 LC\\_IDENTIFICATION=C\n> \n> \n> time zone: America\/New\\_York\n> tzcode source: system (glibc)\n> \n> \n> attached base packages:\n> [1] stats graphics grDevices utils datasets methods base\n> \n> \n> other attached packages:\n> [1] mvtnorm\\_1.2-4\n> \n> \n> loaded via a namespace (and not attached):\n> [1] compiler\\_4.3.1 tools\\_4.3.1 rstudioapi\\_0.16.0\n> \n> \n>","questionMetadata":{"type":"debugging","tag":"r","level":"advanced"},"answer":"I can't claim to know much about the Genz-Bretz algorithm, but I recently ported [`mvt.f`](https:\/\/github.com\/cran\/mvtnorm\/blob\/master\/src\/mvt.f) to C\/SAS so I've been staring at its code for quite some time.\n\n\nThe result of this routine depends on pseudo-random number generation through Monte Carlo lattice scrambling -- whatever that means -- hence your issue only occurs for certain seeds. Very specifically [this](https:\/\/github.com\/cran\/mvtnorm\/blob\/126b767dcdb436ef00f9cdcd5b26a99c44302fee\/src\/mvt.f#L251) is the offending line:\n\n\n\n```\nY(ND) = MVPHNV( DI + W(ND)*( EI - DI ) )\n\n```\n\n`MVPHNV` is actually a [wrapper](https:\/\/github.com\/cran\/mvtnorm\/blob\/master\/src\/C_FORTRAN_interface.c) for R's `qnorm` inverse normal CDF function. Because you start the integration in one of your variates from a pretty extreme value the remaining density is already dangerously close to machine epsilon:\n\n\n\n```\nprt <- function(x, d=30) sprintf(paste0(\"%.\", d, \"f\"), x)\n\np <- pnorm(7.089083)\n#> 1\n\n## Not *exactly* 1 however [see also pnorm(..., lower.tail = FALSE)]\nprt(p)\n#> \"0.999999999999324984401027904823\"\n\n```\n\nThe Genz-Bretz algorithm perturbs these variates through `W(ND)`, `DI` and `EI` above, which in some cases causes it to veer even closer to 1. 
Eventually the following happens:\n\n\n\n```\nqnorm(1) ## Or within machine epsilon of 1\n#> Inf\n\n```\n\nThis isn't handled downstream, and a few additions\/multiplications later the result becomes `NaN` from which it never recovers.\n\n\nNow, this is indeed a bug in `mvtnorm` since it doesn't address this edge case correctly, but the underlying issue is that you're trying to do calculations for which the accuracy might be questionable to start with. The \"true\" answer is within an order of magnitude above machine epsilon, and it having an error of zero is most likely an underflow in the first place. You really can't go much further out into the tails either before you run into problems in general:\n\n\n\n```\nprt(pnorm(8.17)) ## Still not exactly 1\n#> 0.999999999999999888977697537484\n\npnorm(8.17) == pnorm(8.29) ## But 8.17 == 8.29?!\n#> TRUE\n\nprt(pnorm(8.30))\n#> 1.000000000000000000000000000000\n\n```\n\nThe calculation with `lower.tail = FALSE` is a *little* bit more accurate here but will be worse in the other direction. This is also the reason why `mvtnorm::Miwa` won't integrate out to infinity in your case; computers simply don't have infinite precision."}
{"questionId":"73a3edecb5348da56694bd94f4bca4a236f87196f4603479ccfda8c395bbda2b","question":"Branchless count-leading-zeros on 32-bit RISC-V without Zbb extension\nThe context of this question is the creation of a side-channel resistant implementation of a IEEE-754 compliant single-precision square root for a 32-bit RISC-V platform without hardware support for floating-point arithmetic and without the Zbb extension for advanced bit manipulation. Integer multiplies, in particular the `MUL` and `MULHU` instructions, *are* supported by the hardware and can be assumed to have fixed latency. Counting the leading zero bits is required for normalization of subnormal operands, and the `CLZ` emulation should be branchless because of the side-channel resistant design.\n\n\nI started with C99 code for a 32-bit leading-zero count that I used twenty years ago on ARMv4t processors. This is a *full-range* implementation, i.e. it returns 32 for an input of zero.\n\n\n\n```\nuint32_t cntlz (uint32_t a)\n{ \n uint32_t n = 0;\n#if 0\n n = __builtin_clz (a);\n#else\n n = !a + 1;\n if (a < 0x00010000u) { n |= 16; a <<= 16; }\n if (a < 0x01000000u) { n |= 8; a <<= 8; }\n if (a < 0x10000000u) { n |= 4; a <<= 4; }\n if (a < 0x40000000u) { n += 2; a <<= 2; }\n n = n - (a >> 31);\n#endif\n return n;\n}\n\n```\n\nAs a *sanity check*, I compiled the above source with clang 18.1 `-marm -march=armv4t`, resulting in the following code that, at 16 instruction without function return, uses one instruction more than the best ARMv4t implementation I am aware of (which uses 15 instructions without the function return):\n\n\n\n```\ncntlz:\n mov r1, #1\n cmp r0, #0\n moveq r1, #2\n cmp r0, #65536\n lsllo r0, r0, #16\n orrlo r1, r1, #16\n cmp r0, #16777216\n lsllo r0, r0, #8\n orrlo r1, r1, #8\n cmp r0, #268435456\n lsllo r0, r0, #4\n orrlo r1, r1, #4\n cmp r0, #1073741824\n addlo r1, r1, #2\n lsllo r0, r0, #2\n add r0, r1, r0, asr #31\n bx lr\n\n```\n\nI am currently working without access to a RISC-V development platform and used Compiler Explorer to compile for a 32-bit RISC-V target. I could not figure out how to specify extensions properly to turn off floating-point support, so I used clang 18.1 with `-march=rv32gc`, which resulted in the following assembly code being generated:\n\n\n\n```\ncntlz: # @cntlz\n seqz a1, a0\n srli a2, a0, 16\n seqz a2, a2\n slli a2, a2, 4\n or a1, a1, a2\n sll a0, a0, a2\n srli a2, a0, 24\n seqz a2, a2\n slli a2, a2, 3\n or a1, a1, a2\n sll a0, a0, a2\n srli a2, a0, 28\n seqz a2, a2\n slli a2, a2, 2\n or a1, a1, a2\n sll a0, a0, a2\n srli a2, a0, 30\n seqz a2, a2\n slli a2, a2, 1\n or a1, a1, a2\n sll a0, a0, a2\n srai a0, a0, 31\n add a0, a0, a1\n addi a0, a0, 1 \n ret\n\n```\n\nI am unable to identify any improvements to the code generated by Clang, that is, it appears to be as tight as possible. I am aware that RISC-V implementations could implement macro-op fusion. See: Christopher Celio, et al., \"The Renewed Case for the Reduced Instruction Set Computer:\nAvoiding ISA Bloat with Macro-Op Fusion for RISC-V\", UC Berkeley technical report EECS-2016-130. But none of the fusion idioms discussed in the report appear to apply to this code, leading me to assume an execution time of 24 cycles for this 24 instruction sequence (without the function return). I was curious what `__builtin_clz()` resolves to. 
Compiling with that code path enabled results in a 31-instruction sequence that converts the leading zeros into a left-justified mask of 1-bits and then applies a population count computation to the mask:\n\n\n\n```\n srli a1, a0, 1\n or a0, a0, a1\n srli a1, a0, 2\n or a0, a0, a1\n srli a1, a0, 4\n or a0, a0, a1\n srli a1, a0, 8\n or a0, a0, a1\n srli a1, a0, 16\n or a0, a0, a1\n not a0, a0 \/\/ a0 now left-aligned mask of 1-bits\n srli a1, a0, 1\n lui a2, 349525\n addi a2, a2, 1365\n and a1, a1, a2\n sub a0, a0, a1\n lui a1, 209715\n addi a1, a1, 819\n and a2, a0, a1\n srli a0, a0, 2\n and a0, a0, a1\n add a0, a0, a2\n srli a1, a0, 4\n add a0, a0, a1\n lui a1, 61681\n addi a1, a1, -241\n and a0, a0, a1\n lui a1, 4112\n addi a1, a1, 257\n mul a0, a0, a1\n srli a0, a0, 24\n ret\n\n```\n\nAgain, I am not sure what instructions could be subject to macro-op fusion here, but the most likely candidate seems to be the `LUI`\/`ADDI` idiom used to load 32-bit immediate data, similar to the way modern ARM processors fuse `MOVW`\/`MOVT` pairs. With that assumption, the code would still appear to be slower than what I currently have. I tried half a dozen additional integer-based variants of 32-bit `CLZ` emulation and did not find any that resulted in fewer than 24 instructions. I also searched the internet and was unable to find anything superior to my current code.\n\n\nAre there any *branchless* full-range implementations of leading-zero count for 32-bit RISC-V platforms that require fewer than 24 cycles? Conservatively, I want to assume the absence of macro-op fusion as this seems like an expensive feature in a low-end microcontroller, but answers relying on macro-op fusion as present in existing RISC-V implementations are also welcome.\n\n\n**Note:** Table-based methods are not suitable in this context as table access could trigger cache misses which can be exploited for side-channel attacks.\n\n\n**Update 6\/9\/2024**: After working on this issue for another 1.5 days, I found a variant that should reduce the number of instructions required from 24 to 22, however, not all compilers can actually deliver the desired code.\n\n\nThe basic observation is that the RISC-V ISA is not orthogonal with regard to `SET`-type instructions, in that it only supports a \"less than\" flavor. Other comparisons may require inversion of the comparison followed by *inversion of the result*, for example by XOR-ing 1, or applying `SEQZ`, adding one instruction per instance. My idea now is to avoid this per-instance inversion by transposing a positive operand into a negative one *once*, allowing the direct use of \"less than\" comparisons. Expressed in portable C++11 code, and annotated with the RISC-V instructions I expect to be generated:\n\n\n\n```\n\/\/ https:\/\/stackoverflow.com\/a\/74563384\/780717\n\/\/ Only applies right shift to non-negative values to avoid implementation-defined behavior\nint32_t sra (int32_t x, int32_t y)\n{\n return (x < 0) ? 
(~(~x >> y)) : (x >> y);\n}\n\nuint32_t cntlz_rv32 (uint32_t a)\n{\n uint32_t n, t;\n int32_t as;\n n = !a; \/\/ 1: seqz\n t = ((a >> 16)!=0)*16; n = n - t; a = a >> t; \/\/ 5: srli, snez, slli, sub, srl\n as = (int32_t)(~a); \/\/ 1: not\n t = (as < -256) * 8; n = n - t; as = sra (as, t); \/\/ 4: slti, slli, sub, sra\n t = (as < -16) * 4; n = n - t; as = sra (as, t); \/\/ 4: slti, slli, sub, sra\n t = (as < -4) * 2; n = n - t; as = sra (as, t); \/\/ 4: slti, slli, sub, sra\n t = (as < -2) * 1; n = n - t; \/\/ 2: slti, sub\n n += 31; \/\/ 1: addi\n return n; \/\/ 22 instructions total w\/o ret\n}\n\n```\n\nWith gcc 13.3 (**not** 14.1), the following branchless 22-instruction sequence (not counting the `ret`) is generated:\n\n\n\n```\ncntlz_rv32(unsigned int):\n srli a3,a0,16\n snez a3,a3\n slli a3,a3,4\n srl a4,a0,a3\n not a4,a4\n slti a1,a4,-256\n slli a1,a1,3\n sra a4,a4,a1\n slti a2,a4,-16\n slli a2,a2,2\n seqz a5,a0\n addi a5,a5,31\n sra a0,a4,a2\n slti a4,a0,-4\n sub a5,a5,a3\n slli a4,a4,1\n sub a5,a5,a1\n sub a5,a5,a2\n sra a0,a0,a4\n sub a5,a5,a4\n slti a0,a0,-2\n sub a0,a5,a0\n ret\n\n```\n\nFor reasons I do not understand, gcc 14.1 refuses to generate the expected `SLTI` with `-256`. By moving the comparison prior to the application of the 1's complement it then requires the addition of an `SEQZ` to invert the result of the comparison. clang 18.1 generates an inefficient 33-instruction sequence for the above source code for no reason that I can discern.","questionMetadata":{"type":"implementation","tag":"c","level":"advanced"},"answer":"After some more thinking, I invented hybrid approach, when long shifts by 16\/8\/4 performed as in the 1st version (binary searches), and instead of last two ones, performed \"table lookup\", when table - just 32-bit word. I not sure, is this better than two previous approaches, but try to check it, also:\n\n\n\n```\nuint32_t myclz2(uint32_t x) {\n int r = !x, c;\n c = (x < 0x00010000) << 4;\n r += c; x <<= c; \/\/ off 16\n c = (x < 0x01000000) << 3;\n r += c; x <<= c; \/\/ off 8\n c = (x < 0x10000000) << 2;\n r += c; x <<= c; \/\/ off 4\n c = (x >> (32 - 4 - 1)) & 0x1e;\n r += (0x55af >> c) & 3;\n return r;\n}"}
{"questionId":"a03d71c6fc54e95ebd53494ace4b422e9e712098fd935c8249e3b65da3745f6e","question":"Gem::Ext::BuildError: ERROR: Failed to build gem native extension unicode.c:1058:20: [-Wincompatible-function-pointer-types]\nSome details about my environment:\n\n\n- Ruby version: ruby 3.2.0\n- RubyGems version: 3.5.6\n- rbenv version: 1.2.0\n- MacOS: Sonoma 14.1.2 \/ M2\n\n\nI'm trying to run **bundle install** to install the gems for my project but I'm getting the following error:\n\n\n\n```\nInstalling unicode 0.4.4.4 with native extensions\nGem::Ext::BuildError: ERROR: Failed to build gem native extension.\n\n current directory: \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/unicode-0.4.4.4\/ext\/unicode\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/bin\/ruby -I \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0 extconf.rb --with-cflags\\=-Wno-error\\=implicit-function-declaration\ncreating Makefile\n\ncurrent directory: \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/unicode-0.4.4.4\/ext\/unicode\nmake DESTDIR\\= sitearchdir\\=.\/.gem.20240308-48017-ak1bt1 sitelibdir\\=.\/.gem.20240308-48017-ak1bt1 clean\n\ncurrent directory: \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/unicode-0.4.4.4\/ext\/unicode\nmake DESTDIR\\= sitearchdir\\=.\/.gem.20240308-48017-ak1bt1 sitelibdir\\=.\/.gem.20240308-48017-ak1bt1\ncompiling unicode.c\nunicode.c:37:7: warning: 'RB_OBJ_TAINTED' is deprecated: taintedness turned out to be a wrong idea. [-Wdeprecated-declarations]\n if (OBJ_TAINTED(src))\n ^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/fl_type.h:151:25: note: expanded from macro 'OBJ_TAINTED'\n#define OBJ_TAINTED RB_OBJ_TAINTED \/**< @old{RB_OBJ_TAINTED} *\/\n ^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/fl_type.h:118:30: note: expanded from macro 'RB_OBJ_TAINTED'\n#define RB_OBJ_TAINTED RB_OBJ_TAINTED\n ^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/fl_type.h:812:1: note: 'RB_OBJ_TAINTED' has been explicitly marked deprecated here\nRBIMPL_ATTR_DEPRECATED((\"taintedness turned out to be a wrong idea.\"))\n^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/attr\/deprecated.h:36:53: note: expanded from macro 'RBIMPL_ATTR_DEPRECATED'\n# define RBIMPL_ATTR_DEPRECATED(msg) __attribute__((__deprecated__ msg))\n ^\nunicode.c:38:5: warning: 'RB_OBJ_TAINT' is deprecated: taintedness turned out to be a wrong idea. 
[-Wdeprecated-declarations]\n OBJ_TAINT(obj);\n ^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/fl_type.h:149:25: note: expanded from macro 'OBJ_TAINT'\n#define OBJ_TAINT RB_OBJ_TAINT \/**< @old{RB_OBJ_TAINT} *\/\n ^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/fl_type.h:116:30: note: expanded from macro 'RB_OBJ_TAINT'\n#define RB_OBJ_TAINT RB_OBJ_TAINT\n ^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/fl_type.h:843:1: note: 'RB_OBJ_TAINT' has been explicitly marked deprecated here\nRBIMPL_ATTR_DEPRECATED((\"taintedness turned out to be a wrong idea.\"))\n^\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/attr\/deprecated.h:36:53: note: expanded from macro 'RBIMPL_ATTR_DEPRECATED'\n# define RBIMPL_ATTR_DEPRECATED(msg) __attribute__((__deprecated__ msg))\n ^\nunicode.c:1039:20: error: incompatible function pointer types passing 'VALUE (get_categories_param *)' (aka 'unsigned long (struct _get_categories_param *)') to parameter of type 'VALUE (*)(VALUE)' (aka 'unsigned long (*)(unsigned long)') [-Wincompatible-function-pointer-types]\n return rb_ensure(get_categories_internal, (VALUE)¶m,\n ^~~~~~~~~~~~~~~~~~~~~~~\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/iterator.h:425:25: note: passing argument to parameter 'b_proc' here\nVALUE rb_ensure(VALUE (*b_proc)(VALUE), VALUE data1, VALUE (*e_proc)(VALUE), VALUE data2);\n ^\nunicode.c:1040:20: error: incompatible function pointer types passing 'VALUE (WString *)' (aka 'unsigned long (struct _WString *)') to parameter of type 'VALUE (*)(VALUE)' (aka 'unsigned long (*)(unsigned long)') [-Wincompatible-function-pointer-types]\n get_categories_ensure, (VALUE)&wstr);\n ^~~~~~~~~~~~~~~~~~~~~\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/iterator.h:425:62: note: passing argument to parameter 'e_proc' here\nVALUE rb_ensure(VALUE (*b_proc)(VALUE), VALUE data1, VALUE (*e_proc)(VALUE), VALUE data2);\n ^\nunicode.c:1057:20: error: incompatible function pointer types passing 'VALUE (get_categories_param *)' (aka 'unsigned long (struct _get_categories_param *)') to parameter of type 'VALUE (*)(VALUE)' (aka 'unsigned long (*)(unsigned long)') [-Wincompatible-function-pointer-types]\n return rb_ensure(get_categories_internal, (VALUE)¶m,\n ^~~~~~~~~~~~~~~~~~~~~~~\n\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/include\/ruby-3.2.0\/ruby\/internal\/iterator.h:425:25: note: passing argument to parameter 'b_proc' here\nVALUE rb_ensure(VALUE (*b_proc)(VALUE), VALUE data1, VALUE (*e_proc)(VALUE), VALUE data2);\n ^\nunicode.c:1058:20: error: incompatible function pointer types passing 'VALUE (WString *)' (aka 'unsigned long (struct _WString *)') to parameter of type 'VALUE (*)(VALUE)' (aka 'unsigned long (*)(unsigned long)') [-Wincompatible-function-pointer-types]\n get_categories_ensure, (VALUE)&wstr);\n ^~~~~~~~~~~~~~~~~~~~~\n.\n.\n.\n2 warnings and 6 errors generated.\nmake: *** [unicode.o] Error 1\n\nmake failed, exit code 2\n\nGem files will remain installed in \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/unicode-0.4.4.4 for inspection.\nResults logged to \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/extensions\/arm64-darwin-23\/3.2.0\/unicode-0.4.4.4\/gem_make.out\n\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:102:in `run'\n 
\/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:51:in `block in make'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:43:in `each'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:43:in `make'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/ext_conf_builder.rb:42:in `build'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:170:in `build_extension'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:204:in `block in build_extensions'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:201:in `each'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/ext\/builder.rb:201:in `build_extensions'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/site_ruby\/3.2.0\/rubygems\/installer.rb:843:in `build_extensions'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/bundler-2.5.6\/lib\/bundler\/rubygems_gem_installer.rb:76:in `build_extensions'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/bundler-2.5.6\/lib\/bundler\/rubygems_gem_installer.rb:28:in `install'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/bundler-2.5.6\/lib\/bundler\/source\/rubygems.rb:205:in `install'\n \/Users\/john.doe\/.rbenv\/versions\/3.2.0\/lib\/ruby\/gems\/3.2.0\/gems\/bundler-.\n.\n.\nAn error occurred while installing unicode (0.4.4.4), and Bundler cannot continue.\n\n```\n\nI would really appreciate any help that you can provide.\nThanks.\n\n\n- I tried \"xcode-select --install\" to check if there was something missing\n- I've tried with asdf version manager to check if the issue was related to rbenv","questionMetadata":{"type":"debugging","tag":"ruby","level":"intermediate"},"answer":"Use this:\n\n\ngem install unicode -- --with-cflags=\"-Wno-incompatible-function-pointer-types\""}
{"questionId":"ed9d0dcecd983967b4ff3bfab8f0c60c2f9bf169fa99f74751139433ddc2edef","question":"How \"auto lift\" is implemented in Haskell?\nI'm new to Haskell and learning about Monad Transformer. I found that `lift` can be omitted when operating on an inner monad in a monad stack. For example:\n\n\n\n```\ntype Foo = ReaderT String (WriterT String Identity) Int\n\nfoo :: Int -> Foo\nfoo x = do\n env <- ask\n tell $ env ++ \"in foo\" -- actually, it should be `lift $ tell $ env ++ \"in foo\"` intuitively\n return x\n\n```\n\nI think maybe it's not a language feature but an ad-hoc extension of `ghc`(right?). And I wonder how this is implemented.","questionMetadata":{"type":"conceptual","tag":"haskell","level":"intermediate"},"answer":"The signature of `tell` is [**`tell :: MonadWriter w m => w -> m ()`**\u00a0[Hackage]](https:\/\/hackage.haskell.org\/package\/mtl-2.3.1\/docs\/Control-Monad-Writer-CPS.html#v:tell).\n\n\nIt thus is implemented for any `m` that is a member of the `MonadWriter`, and one of these instances is:\n\n\n\n```\ninstance **MonadWriter** w **m** => MonadWriter w (**ReaderT r m**) where\n -- \u2026\n```\n\nIf the inner monad of a `ReaderT` thus is a `MonadWriter` (and `WriterT` offers that), then `ReaderT` is also a member of the `MonadWriter`. [It implements, as you probably figured out yourself, `tell` as\u00a0[Haskell-src]](https:\/\/hackage.haskell.org\/package\/mtl-2.3.1\/docs\/src\/Control.Monad.Writer.Class.html#line-173):\n\n\n\n> \n> ```\n> instance MonadWriter w m => MonadWriter w (ReaderT r m) where\n> writer = lift . writer\n> **tell = lift . tell**\n> listen = mapReaderT listen\n> pass = mapReaderT pass\n> ```\n> \n\n\nso it performs a `lift` itself, just because of the `instance` Haskell will pick for it. The idea is that for each monad transformer a person defines, where lifting is possible, you could implement this to make it more transparent where `tell` will be \"routed\" to."}
{"questionId":"3d392842e5c9080e25f1c980f8eed85432f8cc1a26e5d840e6e53a1de071e6c4","question":"Is a non-constant zero integer cast to `void \\*` still a null pointer?\nThe expresssion `(void *)0` is called a null pointer.\nBut how about the following:\n\n\n\n```\nint i = 0;\nvoid *s = (void *)i;\n\n```\n\nIs `s` also a null-pointer? The C-language standard says:\n\n\n\n> \n> #### 6.3.2.3 Pointers\n> \n> \n> 3 An integer constant expression with the value 0, such an expression cast to type void \\*, or the predefined constant nullptr is called a null pointer constant70). If a null pointer constant or a value of the type nullptr\\_t (which is necessarily the value nullptr) is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.\n> \n> \n> 4 Conversion of a null pointer to another pointer type yields a null\n> pointer of that type. Any two null pointers shall compare equal.\n> \n> \n> 5 An integer may be converted to any pointer type. Except as previously specified, the result is implementation-defined, might not be correctly aligned, might not point to an entity of the referenced type, and might produce an indeterminate representation when stored into an object.71)\n> \n> \n> \n\n\nAccording to this `s` would not be a null pointer?","questionMetadata":{"type":"conceptual","tag":"c","level":"intermediate"},"answer":"It will be on any typical modern system, but it doesn't have to be. Nothing in the standard requires pointer casts to behave the same for constant expressions and non-constant expressions.\n\n\nThe standard rationale document makes this explicit. From the [C99 Rationale](https:\/\/www.open-std.org\/jtc1\/sc22\/wg14\/www\/C99RationaleV5.10.pdf), Revision 5.10, 6.3.2.3:\n\n\n\n> \n> Since pointers and integers are now considered incommensurate, the only integer value that can\n> be safely converted to a pointer is a constant expression with the value 0. The result of\n> converting any other integer value, **including a non-constant expression with the value 0**, to a\n> pointer is implementation-defined.\n> \n> \n>"}
{"questionId":"507bdd193959dac581eab840a961dc62533b464b6a6ae6011fdf99ef9f395e97","question":"After upgrade to Gitlab-ee 17, there are problems\nMy box is Centos7, and by using 'yum install -y gitlab-ee' to upgrade gitlab-ee. After Gitlab-EE 17 upgraded, there are problems:\n\n\n1. Gitlab runner cannot be picked\n2. in Gitlab server, there is \"Current Status: Unhealthy\", via Admin Area -> Health Check\n\n\n\n```\nMigrations are pending. To resolve this issue, run: bin\/rails db:migrate RAILS_ENV=production You have 103 pending migrations: \n20240205170838_change_approval_merge_request_rules_vulnerability_states_default.rb \n20240205171942_change_approval_project_rules_vulnerability_states_default.rb \n20240325131114_move_self_managed_cr_to_instance.rb \n20240328032449_drop_merge_request_diff_llm_summary_table.rb \n20240402143848_queue_backfill_jira_tracker_data_project_keys.rb \n20240403005214_add_concurrent_index_merge_requests_for_latest_diffs_with_state_merged.rb \n20240403005435_add_concurrent_index_on_merge_request_diffs_head_commit_sha.rb \n20240403104306_add_tmp_backfill_index_for_pipeline_ids_to_vulnerability_occurrences.rb \n20240408135326_remove_foreign_keys_from_external_approval_rules_protected_branches.rb \n20240408135652_drop_external_approval_rules_protected_branches_table.rb \n20240409070036_sync_index_for_pipelines_unique_id_bigint.rb \n20240410070036_prepare_async_indexes_for_pipelines_id_bigint.rb \n20240410104838_index_vulnerability_reads_on_state_report_type_severity_traversal_ids_archived.rb \n20240411070036_async_fk_id_bigint4_ci_pipelines_p_ci_builds_ci_pipelines_config_p_ci_stages.rb \n20240412000002_prepare_async_index_for_builds_part5.rb \n20240412125902_sync_index_for_p_ci_builds_part4.rb \n20240415122603_remove_inputs_from_catalog_resource_components.rb \n20240415130318_migrate_application_settings_help_text.rb \n20240416005004_swap_columns_for_p_ci_builds_runner_id.rb \n20240416094040_drop_ci_partition_sequence.rb \n20240416103114_ensure_backfill_packages_build_infos_pipeline_id_convert_to_bigint_is_completed.rb \n20240416103210_create_indexes_for_packages_build_infos_pipeline_id_convert_to_bigint.rb \n20240416110447_ensure_backfill_merge_trains_pipeline_id_convert_to_bigint_is_completed.rb \n20240416110522_create_indexes_for_merge_trains_pipeline_id_convert_to_bigint.rb \n20240416111927_ensure_backfill_vulnerability_feedback_id_convert_to_bigint_is_completed.rb \n20240416112009_create_index_for_vulnerability_feedback_pipeline_id_convert_to_bigint.rb \n20240416144510_migrate_container_protection_rules_minimum_access_level.rb \n20240416144520_cleanup_container_registry_protection_rule_protected_up_to_access_levels_rename.rb \n20240416144924_remove_catalog_resource_components_path_column.rb \n20240419035359_add_workspace_variables_project_id_trigger.rb \n20240419035360_queue_backfill_workspace_variables_project_id.rb \n20240419035507_add_work_item_parent_links_namespace_id_trigger.rb \n20240419035508_queue_backfill_work_item_parent_links_namespace_id.rb \n20240419035619_add_wiki_repository_states_project_id_trigger.rb \n20240419035620_queue_backfill_wiki_repository_states_project_id.rb \n20240419122328_swap_vulnerability_feedback_pipeline_id_convert_to_bigint.rb \n20240419131607_swap_packages_build_infos_pipeline_id_convert_to_bigint.rb \n20240421011547_sync_index_for_pipelines_id_bigint_related.rb \n20240421014253_validate_fk_id_bigint4_ci_pipelines_p_ci_builds_ci_pipelines_config_p_ci_stages.rb 
\n20240422070036_swap_ci_pipelines_pk_with_bigint_p_ci_builds_p_ci_stages.rb \n20240422080018_swap_merge_trains_pipeline_id_convert_to_bigint.rb \n20240422163959_queue_disable_allow_runner_registration_on_namespace_settings_for_gitlab_com.rb \n20240422164345_remove_user_details_onboarding_step_url_column.rb \n20240422164718_add_tmp_index_environments_on_flux_resource_path.rb \n20240422165035_update_kustomization_api_version.rb \n20240422165424_remove_tmp_index_environments_on_flux_resource_path.rb \n20240422232001_finalize_backfill_has_merge_request_of_vulnerability_reads.rb \n20240423020601_remove_idx_merge_requests_on_target_project_id_and_iid_opened.rb \n20240423022641_drop_idx_merge_requests_on_target_project_id_and_locked_state.rb \n20240423024034_drop_index_merge_requests_on_target_project_id_and_iid_and_state_id.rb \n20240423035245_sync_index_for_pipelines_id_bigint_part5.rb \n20240423035625_prepare_async_index_removal_for_vulnerabilities.rb \n20240423235307_swap_columns_for_p_ci_builds_project_id.rb \n20240424100836_ensure_backfill_merge_request_metrics_pipeline_id_convert_to_bigint_is_completed.rb \n20240424100929_create_indexes_for_merge_request_metrics_pipeline_id_convert_to_bigint.rb \n20240424103758_prepare_async_index_for_builds_part6.rb \n20240424111535_swap_merge_request_metrics_pipeline_id_convert_to_bigint.rb \n20240424120001_remove_unique_index_for_ml_model_packages_on_project_id_name_version.rb \n20240424120002_add_unique_index_for_ml_model_packages_on_project_id_name_version.rb \n20240424180330_remove_partition_p_ci_job_artifacts_project_id_idx.rb \n20240424183213_backfill_deployment_approval_data.rb \n20240425133709_finalize_feedback_to_state_transition_migration.rb \n20240425140717_finalize_vulnerability_links_creation.rb \n20240425170527_remove_foreign_keys_geo_event_log.rb \n20240425182054_remove_unused_columns_geo_event_log.rb \n20240425205205_queue_remove_namespace_from_os_type_sbom_components.rb \n20240426135340_prepare_async_index_to_execution_config_id_in_ci_build.rb \n20240429113537_ensure_backfill_vulnerability_occurrence_pipelines_id_to_bigint_is_completed.rb \n20240429113608_prepare_async_indexes_for_vulnerability_occurrence_pipelines_pipeline_to_bigint.rb \n20240429205901_remove_the_index_ci_pipeline_artifacts_on_pipeline_id.rb \n20240430004051_finalize_backfill_has_remediations_of_vulnerability_reads.rb \n20240430015450_sync_index_for_builds_user_id_bigint.rb \n20240430015514_swap_columns_for_p_ci_builds_user_id.rb \n20240430111455_finalize_backfill_vulnerability_reads_cluster_agent_migration.rb \n20240501044235_index_approval_merge_request_rule_sources_on_project_id.rb \n20240501044236_add_approval_merge_request_rule_sources_project_id_fk.rb \n20240501044237_add_approval_merge_request_rule_sources_project_id_trigger.rb \n20240501044238_queue_backfill_approval_merge_request_rule_sources_project_id.rb \n20240501201630_remove_code_suggestions_enabled_project_setting.rb \n20240502044605_remove_create_empty_embeddings_records_worker.rb \n20240502062514_add_foreign_key_from_pipeline_to_ci_builds_to_execution_configs.rb \n20240502120047_index_vulnerability_reads_for_common_group_level_query.rb \n20240503103337_queue_backfill_epic_basic_fields_to_work_item_record.rb \n20240503165628_remove_foreign_key_geo_hashed_storage_migrated_events.rb \n20240503170147_drop_table_geo_hashed_storage_migrated_events.rb \n20240503171707_remove_foreign_key_geo_hashed_storage_attachments_events.rb \n20240503171904_drop_table_geo_hashed_storage_attachements_events.rb 
\n20240503173034_remove_foreign_key_geo_repository_updated_events.rb \n20240503173210_drop_table_geo_repository_updated_events.rb \n20240503174054_remove_foreign_key_geo_repository_renamed_events.rb \n20240503174241_drop_table_geo_repository_renamed_events.rb \n20240503174832_remove_foreign_key_geo_repository_created_events.rb \n20240503175120_drop_table_geo_repository_created_events.rb \n20240503175735_drop_table_geo_repository_deleted_events.rb \n20240503180347_remove_foreign_key_geo_reset_checksum_events.rb \n20240503180517_drop_table_geo_reset_checksum_events.rb \n20240504042340_add_index_catalog_resources_on_usage_count.rb \n20240507194416_drop_index_abuse_reports_on_user_id.rb \n20240507194839_drop_index_board_group_recent_visits_on_user_id.rb \n20240507231644_add_index_members_on_lower_invite_email.rb \n20240508064453_drop_index_ci_pipeline_config_on_pipeline_id.rb \n20240508072011_drop_index_ci_runner_manager_build_on_runner_machine_id.rb \n20240508085441_re_add_redirect_routes_path_index.rb ..\n\n```\n\n3. when run \"gitlab-ctl check-config\":\n\n\n\n```\nMalformed configuration JSON file found at \/opt\/gitlab\/embedded\/nodes\/VM-12-10-centos.json.\nThis usually happens when your last run of `gitlab-ctl reconfigure` didn't complete successfully.\nThis file is used to check if any of the unsupported configurations are enabled,\nand hence require a working reconfigure before upgrading.\nPlease run `sudo gitlab-ctl reconfigure` to fix it and try again.\n\n```\n\n4.when run \"gitlab-ctl reconfigure\":\n\n\n\n```\nRunning handlers:\n[2024-05-17T13:02:30+08:00] ERROR: Running exception handlers\nThere was an error running gitlab-ctl reconfigure:\n\nversion_file[Create version file for Gitlab KAS] (gitlab-kas::enable line 67) had an error: RuntimeError: Execution of the command `\/opt\/gitlab\/embedded\/bin\/gitlab-kas --version` failed with a non-zero exit code (2)\nstdout: \nstderr: panic: failed to parse \"2024-05-15T09:09:49+0000\" into RFC3339 compliant time object, because: parsing time \"2024-05-15T09:09:49+0000\" as \"2006-01-02T15:04:05Z07:00\": cannot parse \"+0000\" as \"Z07:00\". Fix the build process.\n\ngoroutine 1 [running]:\ngitlab.com\/gitlab-org\/cluster-integration\/gitlab-agent\/v17\/cmd.init.0()\n \/var\/cache\/omnibus\/src\/gitlab-kas\/cmd\/build_info.go:18 +0xf3\n\n\n\nRunning handlers complete\n[2024-05-17T13:02:30+08:00] ERROR: Exception handlers complete\nInfra Phase failed. 2 resources updated in 14 seconds\n[2024-05-17T13:02:30+08:00] FATAL: Stacktrace dumped to \/opt\/gitlab\/embedded\/cookbooks\/cache\/cinc-stacktrace.out\n[2024-05-17T13:02:30+08:00] FATAL: ---------------------------------------------------------------------------------------\n[2024-05-17T13:02:30+08:00] FATAL: PLEASE PROVIDE THE CONTENTS OF THE stacktrace.out FILE (above) IF YOU FILE A BUG REPORT\n[2024-05-17T13:02:30+08:00] FATAL: ---------------------------------------------------------------------------------------\n[2024-05-17T13:02:30+08:00] FATAL: RuntimeError: version_file[Create version file for Gitlab KAS] (gitlab-kas::enable line 67) had an error: RuntimeError: Execution of the command `\/opt\/gitlab\/embedded\/bin\/gitlab-kas --version` failed with a non-zero exit code (2)\nstdout: \nstderr: panic: failed to parse \"2024-05-15T09:09:49+0000\" into RFC3339 compliant time object, because: parsing time \"2024-05-15T09:09:49+0000\" as \"2006-01-02T15:04:05Z07:00\": cannot parse \"+0000\" as \"Z07:00\". 
Fix the build process.\n\ngoroutine 1 [running]:\ngitlab.com\/gitlab-org\/cluster-integration\/gitlab-agent\/v17\/cmd.init.0()\n \/var\/cache\/omnibus\/src\/gitlab-kas\/cmd\/build_info.go:18 +0xf3\n\n\n\n```\n\nIs there anyone facing this issue? Please help to fix, thanks very much.","questionMetadata":{"type":"version","tag":"bash","level":"intermediate"},"answer":"If the kubernetes agent server `gitlab_kas` is not required you can try disabling it by adding the line `gitlab_kas['enable'] = false` to `\/etc\/gitlab\/gitlab.rb`.\n\n\nAfter this, reattempting the `gitlab-ctl reconfigure` worked for us"}
{"questionId":"70dca93eb63b8442dabdab3647f09010c1c2ad519869dc863b15604c27cb3095","question":"Print lines from their beginning to selected characters\nI want to print lines from their beginning to selected characters. \n\nExample:\n\n\n\n```\na\/b\/i\/c\/f\/d \na\/e\/b\/f\/r\/c \na\/f\/d\/g \na\/n\/m\/o \na\/o\/p\/d\/l \na\/b\/c\/d\/e \na\/c\/e\/v \na\/d\/l\/k\/f \na\/e\/f\/c \na\/n\/d\/c\n\n```\n\nCommand:\n\n\n\n```\n .\/hhh.csh 03_input.txt c\n\n```\n\nOutput:\n\n\n\n```\na\/b\/i\/c \na\/e\/b\/f\/r\/c \na\/b\/c \na\/c \na\/e\/f\/c \na\/n\/d\/c\n\n```\n\nI use this code but in the condition `$i ==a` I don't see the values \u200b\u200bbeing checked against the *first* value I assigned.\n\n\n\n```\nawk' \nBEGIN{ \nARGC=2 \n first = ARGV[2] \n} \n{ \nfor(i=1;i<=NF;++i){ \narr[i]=$i \nif($i == first){ \nprint arr[i] \n} \n} \n}' \"$1\" \"$2\" ","questionMetadata":{"type":"implementation","tag":"awk","level":"beginner"},"answer":"As `awk` is tagged, filter for a `match` on `\/c\/`, then `print` the `substr` from position `1` to position `RSTART`, which is where the pattern was found:\n\n\n\n```\n# expecting the filename (e.g. 03_input.txt) on $1,\n# and the pattern (e.g. c) on $2\nawk -v pat=\"$2\" 'match($0, pat) {print substr($0, 1, RSTART)}' \"$1\"\n\n```\n\n\n```\na\/b\/i\/c\na\/e\/b\/f\/r\/c\na\/b\/c\na\/c\na\/e\/f\/c\na\/n\/d\/c\n\n```\n\nNote: You may want to replace `RSTART` with `RSTART+RLENGTH-1` if pattern longer than one character are expected."}
{"questionId":"af9821794743d247f39a88581f53a6edcfc72018a1ec07a104a159a1e126e4fd","question":"How to generalize a template specialization for any variadic template type?\nI have the following type function to calculate whether some type `T` is part of the list of types in an `std::tuple`:\n\n\n\n```\ntemplate<typename T, typename Tuple>\nstruct IsInTuple;\n\ntemplate<typename T, typename ...Ts>\nstruct IsInTuple<T, std::tuple<Ts...>>\n{\n static constexpr bool value = std::disjunction_v<std::is_same<T, Ts>...>;\n};\n\n```\n\nMy question is, is it possible to generalize this function for any variadic template type taking a variadic list of types so that it not only works for `std::tuple`'s, but for example also for `std::variant`'s?","questionMetadata":{"type":"conceptual","tag":"c++","level":"advanced"},"answer":"> \n> Is it possible to generalize this function for any variadic template type taking a variadic list of types so that it not only works for `std::tuple`s, but for example also for `std::variant`s?\n> \n> \n> \n\n\nYes, you can. Just use [template template parameter](https:\/\/stackoverflow.com\/questions\/213761\/what-are-some-uses-of-template-template-parameters) to generalize the type traits.\n\n\n\n```\ntemplate<typename T, typename Class> struct IsInTypeList;\n\ntemplate<typename T\n , template<typename...> class Class, typename... Ts>\nstruct IsInTypeList<T, Class<Ts...>>\n{\n static constexpr bool value = (std::is_same_v<T, Ts> || ...);\n \/\/ or std::disjunction_v<std::is_same<T, Ts>...>;\n};\n\n\/\/ Example usage\nstatic_assert(IsInTypeList<int, std::tuple<int, float, double>>::value, \"int is not in the tuple\");\nstatic_assert(!IsInTypeList<char, std::tuple<int, float, double>>::value, \"char is in the tuple\");\nstatic_assert(!IsInTypeList<char, std::variant<int, float, double>>::value, \"char is in the variant\");\n\n```\n\n***[See live demo](https:\/\/gcc.godbolt.org\/z\/46cs1P7c9)***"}
{"questionId":"faadc28fbf6d0ab28df4b00e782416de9db0f7e4e564957b24032dae175cf655","question":"MySQL 8 - Duplicate entry '1' for key 'tablespaces.PRIMARY' upon triggering a TRUNCATE command\nI am trying to empty the content of a table using the following command:\n\n\n`TRUNCATE TABLE <table_name>;`\n\n\nThe table engine is InnoDB.\n\n\nUpon executing this command, I receive the following error:\n\n\n`SQL Error [1062] [23000]: Duplicate entry '1' for key 'tablespaces.PRIMARY'`\n\n\nAs I keep retrying, the number of the duplicate entry increments and the error becomes, e.g.:\n\n\n`SQL Error [1062] [23000]: Duplicate entry '2' for key 'tablespaces.PRIMARY'`\n\n\nFor context, the schema where I execute this query in does not contain a \"tablespaces\" table. I suspected this table could be in the `information_schema` schema and it's indeed the case, but the tablespace table is completely empty.\n\n\nHow to explain this and how to work around this problem?\n\n\n**UPDATE**\n\n\nFor context, this error started appearing after a migration from MySQL 5.7 to MySQL 8 on AWS Aurora\n\n\n**UPDATE 2**\n\n\nAdditional context:\nEven simply creating a table triggers the error as well.\n\n\nI'm performing this operation on a restored snapshot (I tried using the \"copy-on-write\" and \"full-copy\" methods, with same results).\n\n\nI was afraid the main DB from which the snapshot is taken was equally affected by the bug, but it's not, as I can create tables there.","questionMetadata":{"type":"version","tag":"sql","level":"intermediate"},"answer":"The solutions was sent to me by [@sam-weston](https:\/\/stackexchange.com\/users\/13880286\/sam-weston), who received the information through AWS support.\n\n\nIt appears to be a bug introduced with version 3.05.2 of Aurora MySQL.\n\n\nThe bug was fixed in Aurora MySQL 3.06.0, the solution is therefore to upgrade to this version and future snapshots will not have the issue.\n\n\nAWS also provided the following workaround:\n\n\n\n> \n> The suggested workaround is to restart the writer instance 2 times for the cache to be cleared and then the auto increment continues to work. If this workaround does not work, we kindly request that you restore the snapshot and change the version to 3.06.0 to resolve the issue. This issue has been fixed in this version.\n> \n> \n>"}
{"questionId":"9063c0b8a7e541f866dfce1399859b0bd921db3395317914575c6f64a64ea162","question":"Recover lost code after stash and checkout\nI seem to have misplaced (that's my hope) dozens of hours worth of work. Here is the history of what I did to my Git repo:\n\n\n- I had worked on some code modifications that I wanted to temporarily stash and revert to the previous commit. I had 'Branch A' checked out at the time.\n- `git stash`\n- `git checkout 44f7b43f355e30e15ab9081c2f8e92188277ca53` (I think this is where I went wrong - this was the previous commit I wanted)\n- `git revert 44f7b43f355e30e15ab9081c2f8e92188277ca53` (Clearly I didn't know what I was doing)\n- `git checkout 44f7b43f355e30e15ab9081c2f8e92188277ca53`\n- I now had my previous commit code.\n- `git stash apply` (I wanted to move back forward with my previous changes)\n- `git stash pop` (At this point I assumed I was back to where I wanted to be working on Branch A. I saw all my recent changes present again.)\n- Dozens of hours worth of work done. What I didn't realize is that the command prompt still showed this: `vscode \u279c \/workspaces\/git\/aws-shared_network-terraform\/terraform (7c76f7e)` indicating I wasn't actually in Branch A.\n- ran a commit in vscode\n- ran a sync in vscode. I received an error that I needed to checkout a branch first to sync.\n- I checked out Branch A.\n- \"oh poop\" moment because all my work disappeared.\n- `git checkout 44f7b43f355e30e15ab9081c2f8e92188277ca53` (Code was not restored)\n\n\nI am now sitting at:\n`vscode \u279c \/workspaces\/git\/aws-shared_network-terraform\/terraform (44f7b43) $` \n\n\nIs there a way to recover this?\n\n\nI found that if I run `git log --reflog` I see my commit from today:\n\n\n\n```\ncommit 9746368659f680c934fe97e7c70881ae4882400f\nAuthor: Me\nDate: Wed Apr 3 17:46:13 2024 +0000\n\n enable cdn redirect and cleanup\n\n```\n\nI don't want to do anything else until someone provides direction.","questionMetadata":{"type":"version","tag":"bash","level":"intermediate"},"answer":"You're safe since you did commit your changes and they're in the reflog. The only thing that's missing is they're not part of a named branch. To fix that:\n\n\n1. Check out Branch A.\n2. Run `git cherry-pick 9746368659f680c934fe97e7c70881ae4882400f` to add your commit to Branch A.\n\n\n(Note: It won't literally add that commit hash to the branch. It will replay the commit on the branch as if you had committed to Branch A originally. You'll get a new commit with the same contents and commit message but different hash.)"}
{"questionId":"d07c3016c38165bc209d241c9f6569cd029bfed0807ddab558fad264b11c489e","question":"Unexpected Output from Bit Shifting using len on a string vs. string subslice\nI have a Go program that performs bit shifting and division operations on the length of a string constant, but the output is not what I expect. Here's the code:\n\n\n\n```\npackage main\n\nimport \"fmt\"\n\nconst s = \"123456789\" \/\/ len(s) == 9\n\n\/\/ len(s) is a constant expression,\n\/\/ whereas len(s[:]) is not.\nvar a byte = 1 << len(s) \/ 128\nvar b byte = 1 << len(s[:]) \/ 128\n\nfunc main() {\n fmt.Println(a, b) \/\/ outputs: 4 0\n}\n\n```\n\nIn this program, a and b are calculated using similar expressions involving bit shifting and division. However, the outputs for a and b are 4 and 0, respectively, which seems counterintuitive since both operations involve the same string length and similar arithmetic. Could someone explain why a and b produce different results?\n\n\n- What does the division by 128 and the bit shifting do in this context?\n- Why is len(s[:]) considered not a constant expression, and how does this affect the evaluation?\n\n\nI would appreciate any insights into how these expressions are evaluated differently and why they lead to different outputs in Go.","questionMetadata":{"type":"debugging","tag":"go","level":"intermediate"},"answer":"The difference comes from `len(s)` being a constant and `len(s[:])` not, resulting in the first shift being a constant shift, and the second being a non-constant shift.\n\n\nThe first example is a constant shift operation, will be carried out in the \"const\" space and the result will be converted to `byte` (as it fits into a byte).\n\n\nThe second example is a non-constant shift operation, so according to the spec, `1` will be converted to `byte` first, then the shift and division carried out as a `byte` value (the shift result doesn't fit into `byte`, so the result will be `0`), which divided by `128` will again be `0`.\n\n\nRelevant section from [Spec: Operators:](https:\/\/go.dev\/ref\/spec#Operators)\n\n\n\n> \n> The right operand in a shift expression must have [integer type](https:\/\/go.dev\/ref\/spec#Numeric_types) ([Go 1.13](https:\/\/go.dev\/ref\/spec#Go_1.13)) or be an untyped constant representable by a value of type `uint`. If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.\n> \n> \n> \n\n\nSee related: [Golang shift operator conversion](https:\/\/stackoverflow.com\/questions\/54998543\/golang-shift-operator-conversion\/54998695#54998695)\n\n\nAs to why `len(s[:])` isn't a constant, see [Spec: Slice expressions:](https:\/\/go.dev\/ref\/spec#Slice_expressions)\n\n\n\n> \n> Except for [untyped strings](https:\/\/go.dev\/ref\/spec#Constants), if the sliced operand is a string or slice, the result of the slice operation is a non-constant value of the same type as the operand. **For untyped string operands the result is a non-constant value of type `string`**.\n> \n> \n> \n\n\nYou could argue if the index values are also constants, slicing a constant string could result in a constant value, but this specialization is currently not available, and since it would change the results of current programs (such as in your question), it probably won't be made in the future too."}
{"questionId":"88c291ad84a2d088c105b68c2baeaf001929b430002db5a3c067bd23484dc924","question":"Trouble understanding Haskell function\nI'm having trouble understanding why the following code correctly compiles:\n\n\n\n```\nf :: a -> Maybe a\nf = return Just 3\n\n```\n\n`return Just` has a type of `Monad m => m (a -> Maybe a)` so I'm not sure why passing in an `Int` *unwraps* the monad.\n\n\nYou can even get a little crazy with it:\n\n\n\n```\nf :: a -> Maybe a\nf = return Just (Just . Just . Just . Just)\n\n```\n\nCan someone explain what exactly is going on here?","questionMetadata":{"type":"conceptual","tag":"haskell","level":"intermediate"},"answer":"You're using `return` from the reader monad `(->) Int`, which is defined as `const`. In this case `return Just` has type `Int -> a -> Maybe a`.\n\n\n\n```\nf = return Just 3\n = const Just 3\n = Just"}
{"questionId":"dc8cead00697865f3b02abda94f9d6264440d6ce2bde4dddab1757875008cc1a","question":"Count empty strings?\nIn R, suppose I have a vector like:\n\n\n\n```\nvector<-c(\"Red\", \" \", \"\", \"5\", \"\")\n\n```\n\nI want to count how many elements of this vector are just empty strings that only consist of either **spaces or no spaces at all**. For this very short vector, it is just **three**. The second, third, and fifth elements are just spaces or no spaces at all. They don't have any characters like letters, numbers, symbols, etc.\n\n\nIs there any function or method that will count this? I wanted something I could use on larger vectors rather than just looking at every element of the vector.","questionMetadata":{"type":"implementation","tag":"r","level":"beginner"},"answer":"Use `sum(grepl())` plus an appropriate regular expression:\n\n\n\n```\nvector<-c(\"Red\", \" \", \"\", \"5\", \"\")\nsum(grepl(\"^ *$\", vector))\n\n```\n\n- `^`: beginning of string\n- `*`: zero or more spaces\n- `$`: end of string\n\n\nIf you want to look for \"white space\" more generally (e.g. allowing tabs), use `\"^[[:space:]]*$\"`, although as pointed out in `?grep`, the definition of white space is locale-dependent ...\n\n\n`length(grep(...))` would also work, or `stringr::str_count(vector, \"^ *$\")`.\n\n\nFor what it's worth:\n\n\n\n```\n microbenchmark::microbenchmark(\n bolker = sum(grepl(\"^ *$\", vector)),\n rudolph = sum(! nzchar(trimws(vector))),\n baldur = sum(gsub(\" \", \"\", vector, fixed = TRUE) == \"\"),\n baldur2 = sum(! nzchar(gsub(\" \", \"\", vector, fixed = TRUE))))\n\nUnit: microseconds\n expr min lq mean median uq max neval cld\n bolker 10.499 10.8900 12.31869 11.8020 12.7990 40.976 100 a \n rudolph 19.306 20.0125 22.01722 20.7990 22.9480 66.815 100 b \n baldur 2.294 2.5700 2.76420 2.7455 2.8950 3.567 100 c\n baldur2 2.294 2.4740 2.66267 2.6450 2.7755 5.130 100 c\n\n```\n\n(@RuiBarradas not included because vs similar to @KonradRudolph). I'm surprised that @s\\_baldur's answer is so fast ... but also probably worth keeping in mind that this operation will be fast enough to not worry about efficiency unless it is a *large* part of your overall workflow ..."}
{"questionId":"6631ea6b6735ef32c978fea3e4c3602355c0175263bd2ecbc4441aa76e4c2d1b","question":"Why cant I chain after std::ranges::views::join?\nI recently discovered the ranges stdandard library and encountered a strange behavior. When chaining multiple range adaptors, I can't chain after using `std::ranges::views::join`:\n\n\n`vec | std::ranges::views::slide(2) | std::ranges::views::join`\n\n\nworks as intended, but\n\n\n`vec | std::ranges::views::slide(2) | std::ranges::views::join | std::ranges::views::slide(2)`\n\n\nwill result in compilation error:\n\n\n\n```\nerror: no match for 'operator|' (operand types are 'std::ranges::join_view<std::ranges::slide_view<std::ranges::ref_view<std::vector<int> > > >' and 'std::ranges::views::__adaptor::_Partial<std::ranges::views::_Slide, int>')\n 265 | vec | std::ranges::views::slide(2) | std::ranges::views::join | std::ranges::views::slide(2);\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n | | |\n | | std::ranges::views::__adaptor::_Partial<std::ranges::views::_Slide, int>\n | std::ranges::join_view<std::ranges::slide_view<std::ranges::ref_view<std::vector<int> > > >\n\n```\n\nWhy is that? and what do I have to do to make multiple joins work?\n\n\nThanks in advance!","questionMetadata":{"type":"debugging","tag":"c++","level":"intermediate"},"answer":"slide_view` requires `forward_range` since the elements need to be traversed more than once.\n\n\nThis is not the case for `join_view` in your case, because the latter joins a nested range whose elements are prvalue ranges (`slide_view` acts on `vector` will produce a range whose element type is prvalue `span` as \"window\"), which makes it just an `input_range`.\n\n\nThe constraints are not satisfied so compilation fails."}
{"questionId":"696bd4862aede6476af5f44460b0effb65d757dde63a522f52a645638e0bed90","question":"Recaptcha hitting OUR server with an api2\/clr POST call (resulting in 404s)\nRecently the last few days, we have been seeing a growing number of 404s all with the following format: (some stuff redacted)\n\n\n\n```\n1.1.1.1 - - [11\/Jul\/2024:14:00:56 +0000] \"POST \/recaptcha\/api2\/clr?k={our_site_key} HTTP\/1.1\" 404 5126 \"https:\/\/{some_url_on_our_site}\" \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit\/605.1.15 (KHTML, like Gecko) Version\/17.5 Safari\/605.1.15\"\n2.2.2.2 - - [11\/Jul\/2024:14:16:46 +0000] \"POST \/recaptcha\/api2\/clr?k={our_site_key} HTTP\/1.1\" 404 1698 \"https:\/\/{some_url_on_our_site}\" \"Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/126.0.0.0 Safari\/537.36 Edg\/126.0.0.0\"\n3.3.3.3 - - [11\/Jul\/2024:18:08:07 +0000] \"POST \/recaptcha\/api2\/clr?k={our_site_key} HTTP\/1.1\" 404 1698 \"https:\/\/{some_url_on_our_site}\" \"Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/126.0.0.0 Safari\/537.36\"\n4.4.4.4 - - [11\/Jul\/2024:18:13:37 +0000] \"POST \/recaptcha\/api2\/clr?k={our_site_key} HTTP\/1.1\" 404 1698 \"https:\/\/{some_url_on_our_site}\" \"Mozilla\/5.0 (Linux; Android 10; K) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/126.0.0.0 Mobile Safari\/537.36\"\n5.5.5.5 - - [11\/Jul\/2024:19:11:10 +0000] \"POST \/recaptcha\/api2\/clr?k={our_site_key} HTTP\/1.1\" 404 1698 \"https:\/\/{some_url_on_our_site}\" \"Mozilla\/5.0 (Windows NT 10.0; Win64; x64; rv:127.0) Gecko\/20100101 Firefox\/127.0\"\n6.6.6.6 - - [11\/Jul\/2024:19:47:14 +0000] \"POST \/recaptcha\/api2\/clr?k={our_site_key} HTTP\/1.1\" 404 1150 \"https:\/\/{some_url_on_our_site}\" \"Mozilla\/5.0 (X11; Linux x86_64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/126.0.0.0 Safari\/537.36\"\n\n```\n\nAs you can see, its a wide array of machines and browsers. That url doesnt exist on OUR server... that should be going to googles server, but for some reason it's not.\n\n\nAlso, we have not changed our integration of Recaptcha v2 Invisible for a long time now, so this 'new behavior' is not our doing (that we know of). I am also unable to reproduce this myself on my macs\/pcs, but the amount of these are growing in frequency (maybe at some point I WILL be able to reproduce it).\n\n\nAnyone have any ideas what may be going on here?\n\n\n\n\n---\n\n\nEdit: While doing a capture of the post body (php:\/\/input), the data it's trying to send to our server is an encrypted pile. Its a large amount of binary which needs a key (and neither our public nor secret google keys work that I've tried).","questionMetadata":{"type":"debugging","tag":"other","level":"intermediate"},"answer":"there's an open incident at Google's Recaptcha service: <https:\/\/status.cloud.google.com\/incidents\/MzARofVtutSd2HB5vmkT>"}
{"questionId":"3dadd5ec50a45c221810e4d39743466d5bbcb25bb852b819e7106e94ff27eacf","question":"VS Code file .vscode\/extensions.json select custom version of one extension\nI have this code in .vscode\/extensions.json\n\n\n\n```\n{\n \"recommendations\": [\n johnpapa.Angular2\n ]\n}\n\n```\n\nbut I don't want to install the *latest* version of johnpapa.Angular2 (currently version 17) I want to install version 11.\n\n\nHow can I write this custom version in .vscode\/extensions.json file?","questionMetadata":{"type":"version","tag":"other","level":"beginner"},"answer":"You can't. It's not supported. It was raised before as a feature-request and it was deemed out of scope. See [Allow specifying a version for workspace recommended extensions #138048](https:\/\/github.com\/microsoft\/vscode\/issues\/138048). Though recently, it has been reopened and the maintainers are seeing if they can support the request.\n\n\nFYI- once you've installed an extension, you can [roll back to an older version](\/q\/42626065\/11107541) and [disable automatic updates for that particular extension](\/q\/51965933\/11107541)."}
{"questionId":"969a7dc56e93f614551fccf9b970a8b45ab22ac8cb011843955b6921c0c8d734","question":"Why is Next-Auth creating two tokens in the browser?\nI am using Next Auth with only 1 provider, Azure AD. Usually, Next-Auth with create a session token (`__Secure-next-auth.session-token`) that I can send to my backend and decode for authentication.\n\n\nRecently this token disappeared and in its place there are now two tokens:\n\n\n- `__Secure-next-auth.session-token.0`\n- `__Secure-next-auth.session-token.1`\n\n\nNeither of these tokens are properly formatted JWTs that my backend can decode.\n\n\nWhat are these new tokens and how can I get the old one back?\n\n\n\n\n---\n\n\n**route.ts**\n\n\n\n```\nimport NextAuth from \"next-auth\"\nimport AzureADProvider from \"next-auth\/providers\/azure-ad\"\n \nconst providers = [\n AzureADProvider({\n clientId: process.env.AZURE_AD_CLIENT_ID || '',\n clientSecret: process.env.AZURE_AD_CLIENT_SECRET || '',\n tenantId: process.env.AZURE_AD_TENANT_ID,\n }),\n\n]\n\nexport const authOptions = {\n providers: providers\n}\n\nconst handler = NextAuth(authOptions)\n\nexport { handler as GET, handler as POST }","questionMetadata":{"type":"debugging","tag":"javascript","level":"intermediate"},"answer":"[from docs](https:\/\/next-auth.js.org\/configuration\/options):\n\n\n\n> \n> Cookies in NextAuth.js are chunked by default, meaning that once they\n> reach the 4kb limit, we will create a new cookie with the .{number}\n> suffix and reassemble the cookies in the correct order when parsing \/\n> reading them. This was introduced to avoid size constraints which can\n> occur when users want to store additional data in their sessionToken,\n> for example.\n> \n> \n> \n\n\nif you concatenate both tokens, you should get the complete token."}
{"questionId":"008812baf9398c10f163cb6ee5a1dcc2ab28d53ed1796418d3e3948eeafc0b98","question":"Why is C# DateTime.Now\/DateTime.UtcNow ahead of SQL Server's SYSUTCDATETIME()\/SYSDATETIME() even though C# code executes before the SQL Query\nI want to know the reason why my C# date is larger than the SQL date even though the C# code is running first and after that the SQL query,\n\n\nLogically the SQL date should be greater than C# date.\n\n\nFor your reference the .NET application and SQL Server are on my local machine.\n\n\nC# Code:\n\n\n\n```\n\nusing System.Data;\nusing System.Data.SqlClient;\n\nfor (int i = 1; i <= 20; i++)\n{\n AddRecord();\n}\nConsole.WriteLine(\"20 records added in database....\");\n\nvoid AddRecord()\n{\n try\n {\n string ConnectionString = @\"data source=OM5\\SQL2019; database=TestDb; integrated security=SSPI\";\n using (SqlConnection connection = new SqlConnection(ConnectionString))\n {\n SqlCommand cmd = new SqlCommand()\n {\n CommandText = \"SP_AddRecord\",\n Connection = connection,\n CommandType = CommandType.StoredProcedure\n };\n\n SqlParameter param1 = new SqlParameter\n {\n ParameterName = \"@CSharp_DateNow\",\n SqlDbType = SqlDbType.DateTime2,\n Value = DateTime.Now,\n Direction = ParameterDirection.Input\n };\n cmd.Parameters.Add(param1);\n\n SqlParameter param2 = new SqlParameter\n {\n ParameterName = \"@CSharp_DateUTCNow\",\n SqlDbType = SqlDbType.DateTime2,\n Value = DateTime.UtcNow,\n Direction = ParameterDirection.Input\n };\n cmd.Parameters.Add(param2);\n\n connection.Open();\n cmd.ExecuteNonQuery();\n }\n }\n catch (Exception ex)\n {\n Console.WriteLine($\"Exception Occurred: {ex.Message}\");\n }\n}\n\n\n```\n\nSQL:\n\n\n\n```\nCREATE TABLE [dbo].[Records](\n [Id] [int] PRIMARY KEY IDENTITY(1,1) NOT NULL,\n [SQL_SysDateTime] [datetime2](7) NOT NULL,\n [CSharp_DateNow] [datetime2](7) NOT NULL,\n [SQL_SysUTCDateTime] [datetime2](7) NOT NULL,\n [CSharp_DateUTCNow] [datetime2](7) NOT NULL\n)\n\nCREATE OR ALTER PROCEDURE [dbo].[SP_AddRecord]\n@CSharp_DateNow datetime2,\n@CSharp_DateUTCNow datetime2\nAS\nBEGIN\n SET NOCOUNT ON;\n\n insert into Records(SQL_SysDateTime, CSharp_DateNow, SQL_SysUTCDateTime, CSharp_DateUTCNow) values\n (SYSDATETIME(),@CSharp_DateNow,SYSUTCDATETIME(),@CSharp_DateUTCNow)\nEND\n\n```\n\nResult in table\n\n\n\n\n| SQL\\_SysDateTime | CSharp\\_DateNow | Diff. (MS) | SQL\\_SysUTCDateTime | CSharp\\_DateUTCNow | Diff. 
(MS) |\n| --- | --- | --- | --- | --- | --- |\n| 2024-07-26 13:26:35.2898391 | 2024-07-26 13:26:34.9701658 | 319 | 2024-07-26 07:56:35.2898391 | 2024-07-26 07:56:34.9726788 | 317 |\n| 2024-07-26 13:26:35.3054610 | 2024-07-26 13:26:35.3174393 | -12 | 2024-07-26 07:56:35.3054610 | 2024-07-26 07:56:35.3174492 | -12 |\n| 2024-07-26 13:26:35.3210815 | 2024-07-26 13:26:35.3217354 | 0 | 2024-07-26 07:56:35.3210815 | 2024-07-26 07:56:35.3217461 | 0 |\n| 2024-07-26 13:26:35.3210815 | 2024-07-26 13:26:35.3261818 | -5 | 2024-07-26 07:56:35.3210815 | 2024-07-26 07:56:35.3261915 | -5 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3310309 | 5 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3310384 | 5 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3411312 | -5 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3411394 | -5 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3418632 | -5 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3418676 | -5 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3430069 | -7 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3430104 | -7 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3437519 | -7 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3437554 | -7 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3446140 | -8 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3446172 | -8 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3452865 | -9 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3452894 | -9 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3459309 | -9 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3459336 | -9 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3466520 | -10 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3466552 | -10 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3475280 | -11 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3475305 | -11 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3486445 | -12 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3486474 | -12 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3492964 | -13 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3492991 | -13 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3501936 | -14 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3501961 | -14 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3506370 | -14 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3506392 | -14 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3511339 | -15 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3511362 | -15 |\n| 2024-07-26 13:26:35.3367030 | 2024-07-26 13:26:35.3517053 | -15 | 2024-07-26 07:56:35.3367030 | 2024-07-26 07:56:35.3517087 | -15 |\n\n\nI want an actual reason or an authentic source which can explain this.","questionMetadata":{"type":"conceptual","tag":"sql","level":"intermediate"},"answer":"These values are precise but not accurate.\n\n\nIf you take the distinct values from `SQL_SysDateTime` and compare them...\n\n\n\n```\nSELECT MS_Diff = DATEDIFF(NANOSECOND, '2024-07-26 13:26:35.2898391', '2024-07-26 13:26:35.3054610')\/1E6, \n MS_Diff = DATEDIFF(NANOSECOND, '2024-07-26 13:26:35.3054610', '2024-07-26 13:26:35.3210815')\/1E6, \n MS_Diff = DATEDIFF(NANOSECOND, '2024-07-26 13:26:35.3210815', '2024-07-26 13:26:35.3367030')\/1E6\n\n```\n\nThis returns `15.6219`, `15.6205`, `15.6215` as differences between them (in ms).\n\n\nAs documented 
[here](https:\/\/learn.microsoft.com\/en-us\/sql\/t-sql\/functions\/date-and-time-data-types-and-functions-transact-sql?view=sql-server-ver16#higher-precision-system-date-and-time-functions) SQL Server uses [`GetSystemTimeAsFileTime()`](https:\/\/learn.microsoft.com\/en-us\/windows\/win32\/api\/sysinfoapi\/nf-sysinfoapi-getsystemtimeasfiletime) for `SYSDATETIME()`\/`SYSUTCDATETIME()`.\n\n\nRaymond Chen indicates [here](https:\/\/devblogs.microsoft.com\/oldnewthing\/20170921-00\/?p=97057) that by default `GetSystemTimeAsFileTime()` is not especially accurate though mentions default refresh periods for the value returned by it of 55ms or 10ms rather than 15.62 so presumably this has changed since then.\n\n\nVarious sites indicate that the [default timer resolution in Windows 10 is 15.6 ms](https:\/\/timerresolution.net\/#What_Is_Timer_Resolution_and_How_Does_It_work) ([or more specifically 15625000ns](https:\/\/www.google.com\/search?q=The%20default%20platform%20timer%20resolution%20is%2015.6ms%20%2815625000ns%29)) so the above gaps are in line with that.\n\n\nFor C# the documentation for `DateTime.UtcNow` doesn't [look any more promising](https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.datetime.utcnow?view=net-8.0#remarks)\n\n\n\n> \n> The resolution of this property depends on the system timer, which\n> depends on the underlying operating system. It tends to be between 0.5\n> and 15 milliseconds.\n> \n> \n> \n\n\nSo there is still the question as to how that is achieving the greater accuracy.\n\n\nYou have tagged .NET core. Per [this pull request](https:\/\/github.com\/dotnet\/coreclr\/pull\/9736) it now calls [`GetSystemTimePreciseAsFileTime`](https:\/\/learn.microsoft.com\/en-us\/windows\/win32\/api\/sysinfoapi\/nf-sysinfoapi-getsystemtimepreciseasfiletime) when available (one of the later ones mentioned in the Raymond Chen post above).\n\n\nOn my local machine (Win 11) I do mostly see diffs of around 1ms when running the following test. (But running `powercfg -energy` does tell me that various processes I have running (including `chrome.exe` and `MongoDB`) have [requested](https:\/\/learn.microsoft.com\/en-gb\/windows\/win32\/api\/timeapi\/nf-timeapi-timebeginperiod?redirectedfrom=MSDN) a low time interval for the Platform Timer Resolution)\n\n\n\n```\nSET NOCOUNT ON;\n\nDECLARE @Times TABLE(insert_time datetime2)\n\nDECLARE @Counter INT = 0\n\nWHILE @Counter < 10000\nBEGIN\nINSERT @Times VALUES (SYSUTCDATETIME())\nSET @Counter+=1;\nEND\n\n\nSELECT [rowcount] = COUNT(*), \n insert_time, \n prev_insert_time = LAG(insert_time) OVER (ORDER BY insert_time),\n diff_ms = DATEDIFF(NANOSECOND,LAG(insert_time) OVER (ORDER BY insert_time), insert_time)\/1e6\nFROM @Times\nGROUP BY insert_time\n\n```\n\nSQL Server doesn't currently have any native way of calling `GetSystemTimePreciseAsFileTime` and returning `datetime2(7)` so if this is important to you you will need to do it outside of the database (you *could* also use CLR integration for this but then the assembly would need to be [marked as unsafe](https:\/\/stackoverflow.com\/a\/35347377\/73226) to invoke the WinAPI function).\n\n\nRunning the above on Azure SQL database I got the following results so doesn't look like it is refreshed any more frequently there (and you only get ~64 unique values per second).\n\n\ninterestingly replacing `SYSUTCDATETIME()` with `GETUTCDATE()` I do get diffs of `3.3333`\/`3.3334` ms so this does appear less precise but more accurate. 
Presumably this as a result of the [\"correction\" mentioned here](https:\/\/dba.stackexchange.com\/a\/175723\/3690).\n\n\nThis situation appears to me to be less than ideal. There is a feedback request [Have SYSDATETIME() return value from GetSystemTimePreciseAsFileTime()](https:\/\/feedback.azure.com\/d365community\/idea\/020e2ee3-5325-ec11-b6e6-000d3a4f0da0) but it only has 3 votes and is tagged \"Archived\" so not sure if that means that it will never be considered.\n\n\n\n\n| rowcount | insert\\_time | prev\\_insert\\_time | diff\\_ms |\n| --- | --- | --- | --- |\n| 563 | 2024-07-29 07:21:30.6607494 | NULL | NULL |\n| 646 | 2024-07-29 07:21:30.6763740 | 2024-07-29 07:21:30.6607494 | 15.6246 |\n| 659 | 2024-07-29 07:21:30.6919988 | 2024-07-29 07:21:30.6763740 | 15.6248 |\n| 673 | 2024-07-29 07:21:30.7076257 | 2024-07-29 07:21:30.6919988 | 15.6269 |\n| 666 | 2024-07-29 07:21:30.7232517 | 2024-07-29 07:21:30.7076257 | 15.626 |\n| 659 | 2024-07-29 07:21:30.7390662 | 2024-07-29 07:21:30.7232517 | 15.8145 |\n| 667 | 2024-07-29 07:21:30.7545026 | 2024-07-29 07:21:30.7390662 | 15.4364 |\n| 660 | 2024-07-29 07:21:30.7701262 | 2024-07-29 07:21:30.7545026 | 15.6236 |\n| 667 | 2024-07-29 07:21:30.7857507 | 2024-07-29 07:21:30.7701262 | 15.6245 |\n| 668 | 2024-07-29 07:21:30.8013755 | 2024-07-29 07:21:30.7857507 | 15.6248 |\n| 664 | 2024-07-29 07:21:30.8169996 | 2024-07-29 07:21:30.8013755 | 15.6241 |\n| 631 | 2024-07-29 07:21:30.8326241 | 2024-07-29 07:21:30.8169996 | 15.6245 |\n| 660 | 2024-07-29 07:21:30.8482510 | 2024-07-29 07:21:30.8326241 | 15.6269 |\n| 662 | 2024-07-29 07:21:30.8638744 | 2024-07-29 07:21:30.8482510 | 15.6234 |\n| 670 | 2024-07-29 07:21:30.8795009 | 2024-07-29 07:21:30.8638744 | 15.6265 |\n| 185 | 2024-07-29 07:21:30.8951255 | 2024-07-29 07:21:30.8795009 | 15.6246 |"}
{"questionId":"134de265f8979a0613bb3513057e481532ffa4d78e7d9c1b5363128461947aa5","question":"Efficiently find the number of different classmates from course-level data\nI have been stuck with computing efficiently the number of classmates for each student from a course-level database.\n\n\nConsider this data.frame, where each row represents a course that a student has taken during a given semester:\n\n\n\n```\ndat <- \n data.frame(\n student = c(1, 1, 2, 2, 2, 3, 4, 5),\n semester = c(1, 2, 1, 2, 2, 2, 1, 2),\n course = c(2, 4, 2, 3, 4, 3, 2, 4)\n)\n\n# student semester course\n# 1 1 1 2\n# 2 1 2 4\n# 3 2 1 2\n# 4 2 2 3\n# 5 2 2 4\n# 6 3 2 3\n# 7 4 1 2\n# 8 5 2 4\n\n```\n\nStudents are going to courses in a given semester. Their classmates are other students attending the same course during the same semester. For instance, across both semesters, student 1 has 3 classmates (students 2, 4 and 5).\n\n\nHow can I get the number of *unique* classmates each student has combining both semesters? The desired output would be:\n\n\n\n```\n student n\n1 1 3\n2 2 4\n3 3 1\n4 4 2\n5 5 2\n\n```\n\nwhere `n` is the value for the number of different classmates a student has had during the academic year.\n\n\nI sense that an `igraph` solution could possibly work (hence the tag), but my knowledge of this package is too limited. I also feel like using `joins` could help, but again, I am not sure how.\n\n\nImportantly, I would like this to work for larger datasets (mine has about 17M rows). Here's an example data set:\n\n\n\n```\nset.seed(1)\nbig_dat <- \n data.frame(\n student = sample(1e4, 1e6, TRUE),\n semester = sample(2, 1e6, TRUE),\n course = sample(1e3, 1e6, TRUE)\n )","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"First try with `igraph`:\n\n\n\n```\nlibrary(data.table)\nlibrary(igraph)\n\nsetDT(dat)\ni <- max(dat$student)\ng <- graph_from_data_frame(\n dat[,.(student, class = .GRP + i), .(semester, course)][,-1:-2]\n)\nv <- V(g)[1:uniqueN(dat$student)]\ndata.frame(student = as.integer(names(v)),\n n = ego_size(g, 2, v, mindist = 2))\n#> student n\n#> 1 1 3\n#> 2 2 4\n#> 3 4 2\n#> 4 5 2\n#> 5 3 1\n\n```\n\nNote that if `student` is not integer, you'll need to create a temporary integer id with `match` on the unique value and then index on the final output.\n\n\nWith `tcrossprod`:\n\n\n\n```\nlibrary(data.table)\nlibrary(Matrix)\n\nsetDT(dat)\nu <- unique(dat$student)\ndata.frame(\n student = u,\n n = colSums(\n tcrossprod(\n dat[,id := match(student, u)][\n ,.(i = id, j = .GRP), .(semester, course)\n ][,sparseMatrix(i, j)]\n )\n ) - 1L\n)\n#> student n\n#> 1 1 3\n#> 2 2 4\n#> 3 3 1\n#> 4 4 2\n#> 5 5 2"}
{"questionId":"2823f1c44388796f1c9f7c3617cd23f94c469477aef8e3f688936e58e149a30e","question":"How do you dump an object instance with Perl's new class feature?\nThe goal is to see the encapsulated data, like I've been doing for the last 26 years.\n\n\n\n```\nuse 5.040;\nuse strictures;\nuse experimental 'class';\n\nclass Foo {\n field @member = qw(e r t);\n}\n\nmy $foo = Foo->new;\n\n# use Data::Dumper qw(Dumper);\n# say Dumper $foo;\n# cannot handle ref type 16\n\n# use DDS; DumpLex $foo;\n# _dump_rv() can't handle 'OBJECT' objects yet\n\n# use Data::Dx; Dx $foo;\n# Can't handle OBJECT data\n\n# use DDP; p $foo;\n# Foo {\n# public methods (1): new\n# private methods (0)\n# internals: (opaque object)\n# }\n# \u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191\u2191","questionMetadata":{"type":"debugging","tag":"perl","level":"intermediate"},"answer":"The Tuple::Munge package on CPAN might help you.\n\n\n\n```\nuse v5.38;\n\nsub tuple_to_aref ( $t ) {\n use Tuple::Munge ();\n use experimental 'builtin';\n use builtin 'blessed';\n my $len = Tuple::Munge::tuple_length($t);\n my $aref = [ map Tuple::Munge::tuple_slot($t, $_-1), 1 .. $len ];\n if ( my $class = blessed $t ) {\n return bless( $aref, \"TUPLE_TO_ARRAY::$class\" );\n }\n return $aref;\n}\n\nuse experimental 'class';\nuse Data::Dumper;\n\nclass My::Class {\n field $x;\n ADJUST {\n $x = 'Foobar';\n }\n}\n\nmy $object = My::Class->new;\nprint Dumper( tuple_to_aref( $object ) );\n\n```\n\nSample output:\n\n\n\n```\n$VAR1 = bless( [\n \\'Foobar'\n ], 'TUPLE_TO_ARRAY::My::Class' );"}
{"questionId":"db3e8b852469c65b89a8326b9ec8c82c4c6e95f1a262a7e9fac4e5f6d9d0f8bc","question":"Looking for Regex pattern to return similar results to my current function\nI have some pascal-cased text that I'm trying to split into separate tokens\/words.\nFor example, `\"Hello123AIIsCool\"` would become `[\"Hello\", \"123\", \"AI\", \"Is\", \"Cool\"]`.\n\n\n# Some Conditions\n\n\n- \"Words\" will always start with an upper-cased letter. E.g., `\"Hello\"`\n- A contiguous sequence of numbers should be left together. E.g., `\"123\"` -> `[\"123\"]`, not `[\"1\", \"2\", \"3\"]`\n- A contiguous sequence of upper-cased letters should be kept together *except* when the last letter is the start to a new word as defined in the first condition. E.g., `\"ABCat\"` -> `[\"AB\", \"Cat\"]`, not `[\"ABC\", \"at\"]`\n- There is no guarantee that each condition will have a match in a string. E.g., `\"Hello\"`, `\"HelloAI\"`, `\"HelloAIIsCool\"` `\"Hello123\"`, `\"123AI\"`, `\"AIIsCool\"`, and any other combination I haven't provided are potential candidates.\n\n\nI've tried a couple regex variations. The following two attempts got me pretty close to what I want, but not quite.\n\n\n# Version 0\n\n\n\n```\nimport re\n\ndef extract_v0(string: str) -> list[str]:\n word_pattern = r\"[A-Z][a-z]*\"\n num_pattern = r\"\\d+\"\n pattern = f\"{word_pattern}|{num_pattern}\"\n extracts: list[str] = re.findall(\n pattern=pattern, string=string\n )\n return extracts\n\nstring = \"Hello123AIIsCool\"\nextract_v0(string)\n\n```\n\n\n```\n['Hello', '123', 'A', 'I', 'Is', 'Cool']\n\n```\n\n# Version 1\n\n\n\n```\nimport re\n\ndef extract_v1(string: str) -> list[str]:\n word_pattern = r\"[A-Z][a-z]+\"\n num_pattern = r\"\\d+\"\n upper_pattern = r\"[A-Z][^a-z]*\"\n pattern = f\"{word_pattern}|{num_pattern}|{upper_pattern}\"\n extracts: list[str] = re.findall(\n pattern=pattern, string=string\n )\n return extracts\n\nstring = \"Hello123AIIsCool\"\nextract_v1(string)\n\n```\n\n\n```\n['Hello', '123', 'AII', 'Cool']\n\n```\n\n# Best Option So Far\n\n\nThis uses a combination of regex and looping. It works, but is this the best solution? Or is there some fancy regex that can do it?\n\n\n\n```\nimport re\n\ndef extract_v2(string: str) -> list[str]:\n word_pattern = r\"[A-Z][a-z]+\"\n num_pattern = r\"\\d+\"\n upper_pattern = r\"[A-Z][A-Z]*\"\n groups = []\n for pattern in [word_pattern, num_pattern, upper_pattern]:\n while string.strip():\n group = re.search(pattern=pattern, string=string)\n if group is not None:\n groups.append(group)\n string = string[:group.start()] + \" \" + string[group.end():]\n else:\n break\n \n ordered = sorted(groups, key=lambda g: g.start())\n return [grp.group() for grp in ordered]\n\nstring = \"Hello123AIIsCool\"\nextract_v2(string)\n\n```\n\n\n```\n['Hello', '123', 'AI', 'Is', 'Cool']","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"Based on your Version 1:\n\n\n\n```\nimport re\n\n\ndef extract_v1(string: str) -> list[str]:\n word_pattern = r\"[A-Z][a-z]+\"\n num_pattern = r\"\\d+\"\n upper_pattern = r\"[A-Z]+(?![a-z])\" # Fixed\n pattern = f\"{word_pattern}|{num_pattern}|{upper_pattern}\"\n extracts: list[str] = re.findall(\n pattern=pattern, string=string\n )\n return extracts\n\n\nstring = \"Hello123AIIsCool\"\nextract_v1(string)\n\n```\n\nResult:\n\n\n\n```\n['Hello', '123', 'AI', 'Is', 'Cool']\n\n```\n\nThe fixed `upper_pattern` will match as many uppercased letters as possible, and will stop one before a lowercased letter if it exists."}
{"questionId":"ba806ab1ea69d5a5c607dd50c7d22d421004a671ba59d0f9118b0c7b1b917fd7","question":"Polars Replacing Values Greater than the Max of Another Polars DataFrame Within Groups\nI have 2 DataFrames:\n\n\n\n```\nimport polars as pl\n\ndf1 = pl.DataFrame(\n {\n \"group\": [\"A\", \"A\", \"A\", \"B\", \"B\", \"B\"],\n \"index\": [1, 3, 5, 1, 3, 8],\n }\n)\n\ndf2 = pl.DataFrame(\n {\n \"group\": [\"A\", \"A\", \"A\", \"B\", \"B\", \"B\"],\n \"index\": [3, 4, 7, 2, 7, 10],\n }\n)\n\n```\n\nI want to cap the `index` in `df2` using the **largest index** of each group in `df1`. The groups in two DataFrames are the same.\n\n\nexpected output for `df2`:\n\n\n\n```\nshape: (6, 2)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 group \u2506 index \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 str \u2506 i64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 A \u2506 3 \u2502\n\u2502 A \u2506 4 \u2502\n\u2502 A \u2506 5 \u2502\n\u2502 B \u2506 2 \u2502\n\u2502 B \u2506 7 \u2502\n\u2502 B \u2506 8 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"You can compute the max per group over df1, then [`clip`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/expressions\/api\/polars.Expr.clip.html) df2:\n\n\n\n```\nout = df2.with_columns(\n pl.col('index').clip(\n upper_bound=df1.select(pl.col('index').max().over('group'))['index']\n )\n)\n\n\n```\n\nOutput:\n\n\n\n```\nshape: (6, 2)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 group \u2506 index \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 str \u2506 i64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 A \u2506 3 \u2502\n\u2502 A \u2506 4 \u2502\n\u2502 A \u2506 5 \u2502\n\u2502 B \u2506 2 \u2502\n\u2502 B \u2506 7 \u2502\n\u2502 B \u2506 8 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nAlternatively, if the two groups are not necessarily the same in both dataframes, you could [`group_by.max`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/dataframe\/api\/polars.dataframe.group_by.GroupBy.max.html) then align with [`join`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/dataframe\/api\/polars.DataFrame.join.html):\n\n\n\n```\ndf1 = pl.DataFrame(\n {\n \"group\": [\"A\", \"A\", \"A\", \"B\", \"B\", \"B\"],\n \"index\": [1, 3, 5, 1, 3, 7],\n }\n)\n\ndf2 = pl.DataFrame(\n {\n \"group\": [\"A\", \"A\", \"A\", \"B\", \"B\", \"B\", \"B\"],\n \"index\": [3, 4, 7, 2, 7, 8, 9],\n }\n)\n\nout = df2.with_columns(\n pl.col('index').clip(\n upper_bound=df2.join(df1.group_by('group').max(), on='group')['index_right']\n )\n)\n\n```\n\nOutput:\n\n\n\n```\nshape: (7, 2)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 group \u2506 index \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 str \u2506 i64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 A \u2506 3 \u2502\n\u2502 A \u2506 4 \u2502\n\u2502 A \u2506 5 \u2502\n\u2502 B \u2506 2 \u2502\n\u2502 B \u2506 7 \u2502\n\u2502 B \u2506 7 \u2502\n\u2502 B \u2506 7 
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518"}
{"questionId":"07d218f4b4c14b89155e791ebe85955bb5dd6ed52384b38c761b02e56a1f7044","question":"What is the best way to filter groups by two lambda conditions and create a new column based on the conditions?\nThis is my DataFrame:\n\n\n\n```\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n 'a': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'z', 'z', 'z', 'p', 'p', 'p', 'p'],\n 'b': [1, -1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1]\n }\n)\n\n```\n\nAnd this the expected output. I want to create column `c`:\n\n\n\n```\n a b c\n0 x 1 first\n1 x -1 first\n2 x 1 first\n3 x 1 first\n4 y -1 second\n5 y 1 second\n6 y 1 second\n7 y -1 second\n11 p 1 first\n12 p 1 first\n13 p 1 first\n14 p 1 first\n\n```\n\nGroups are defined by column `a`. I want to filter `df` and choose groups that either their first `b` is 1 OR their second `b` is 1.\n\n\nI did this by this code:\n\n\n\n```\ndf1 = df.groupby('a').filter(lambda x: (x.b.iloc[0] == 1) | (x.b.iloc[1] == 1))\n\n```\n\nAnd for creating column `c` for `df1`, again groups should be defined by `a` and then if for each group first `b` is 1 then `c` is `first` and if the second `b` is 1 then `c` is `second`.\n\n\nNote that for group `p`, both first and second `b` is 1, for these groups I want `c` to be `first`.\n\n\nMaybe the way that I approach the issue is totally wrong.","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"A generic method that works with any number of positions for the first `1`:\n\n\n\n```\nd = {0: 'first', 1: 'second'}\n\ns = (df.groupby('a')['b']\n .transform(lambda g: g.reset_index()[g.values==1]\n .first_valid_index())\n .replace(d)\n )\n\nout = df.assign(c=s).dropna(subset=['c'])\n\n```\n\nNotes:\n\n\n- if you remove the `replace` step you will get an integer in `c`\n- if you use `map` in place of `replace` you can ignore the positions that are not defined as a dictionary key\n\n\nOutput:\n\n\n\n```\n a b c\n0 x 1 first\n1 x -1 first\n2 x 1 first\n3 x 1 first\n4 y -1 second\n5 y 1 second\n6 y 1 second\n7 y -1 second\n11 p 1 first\n12 p 1 first\n13 p 1 first\n14 p 1 first\n\n```\n\nExample from comments:\n\n\n\n```\ndf = pd.DataFrame({'a': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'z', 'z', 'z', 'p', 'p', 'p', 'p'],\n 'b': [1, -1, 1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1]})\n\nd = {0: 'first', 1: 'second'}\n\ns = (df.groupby('a')['b']\n .transform(lambda g: g.reset_index()[g.values==1]\n .first_valid_index())\n .map(d)\n )\n\nout = df.assign(c=s).dropna(subset=['c'])\n\n a b c\n0 x 1 first\n1 x -1 first\n2 x 1 first\n3 x 1 first\n4 y -1 second\n5 y 1 second\n6 y 1 second\n7 y -1 second\n11 p 1 first\n12 p 1 first\n13 p 1 first\n14 p 1 first\n\n```\n\nYou can also only filter the rows with:\n\n\n\n```\nm1 = df.groupby('a').cumcount().le(1)\nm2 = df['b'].eq(1)\nout = df.loc[df['a'].isin(df.loc[m1&m2, 'a'])]"}
{"questionId":"7ffc8cd42796feaf397de2b49cf666aa645e3ed8aa5d688c10d64291dd7893f3","question":"Can you declare a C pointer with its own address?\nCan I declare a C pointer with an intialization to its own address?\n\n\n`void* p = &p;`\n\n\nI am specifically concerned with if this is strictly standard compliant in C23 (as the draft currently stands).\n\n\nI assume this question is equivalent to if the following works, which is closer to my actual use case:\n\n\n\n```\ntypedef struct st {\n void* p;\n } st;\n\nst x = { .p = &x };","questionMetadata":{"type":"version","tag":"c","level":"intermediate"},"answer":"It should be fine. I can't think of any part of the standard making this invalid and C23 didn't bring anything new that changed this either.\n\n\nThe most relevant part of the standard would be 6.6 regarding address constants used as initializers (from the C23 draft N3096):\n\n\n\n> \n> An *address constant* is a null pointer, a pointer to an lvalue designating an object of static storage\n> duration, or a pointer to a function designator; it shall be created explicitly using the unary `&`\n> operator or an integer constant cast to pointer type, or implicitly using an expression of array or\n> function type.\n> \n> \n> The array-subscript `[]` and member-access `->` operator, the address `&` and indirection `*` unary operators, and pointer casts may be used in the creation of an address constant, but the value of an object shall not be accessed by use of these operators.\n> \n> \n> \n\n\nNew in C23 is the following, mostly related to `constexpr` situations and compound literals:\n\n\n\n> \n> *A structure or union constant* is a named constant or compound literal constant with structure or union type, respectively.\n> \n> \n> An implementation may accept other forms of constant expressions; however, they are not an integer\n> constant expression.\n> \n> \n> Starting from a structure or union constant, the member-access `.` operator may be used to form a named constant or compound literal constant as described above. \n> \n> If the member-access operator `.` accesses a member of a union constant, the accessed member shall be the same as the member that is initialized by the union constant\u2019s initializer.\n> \n> \n> \n\n\nFrom a practical point of view beyond the C standard, any variable we declare ought to already have a memory location before we initialize it, or otherwise how would the program know where to store that initializer? (Using the `&` operator also means it can't be stored in a register.) The C standard is purposely vague when it comes to where\/how variables are stored in memory, so it won't cover things like linker addresses, stack offsets and the like."}
{"questionId":"b11f86fe042ee6a58b133fe8deac13c6328d20748d47e79e8b8871532f0759f3","question":"Deprecation Warnings in Flutter After Upgrading\nAfter upgrading Flutter, I encountered the following two warnings related to my index.html file:\n\n\n\n```\nWarning: In index.html:37: Local variable for \"serviceWorkerVersion\" is deprecated. Use \"{{flutter_service_worker_version}}\" template token instead.\nWarning: In index.html:46: \"FlutterLoader.loadEntrypoint\" is deprecated. Use \"FlutterLoader.load\" instead.\n\n```\n\nThese warnings indicate that certain elements in my index.html file are using deprecated methods. Here is my current index.html file:\n\n\n\n```\n <!DOCTYPE html>\n <html>\n <head>\n <!--\n If you are serving your web app in a path other than the root, change the\n href value below to reflect the base path you are serving from.\n \n The path provided below has to start and end with a slash \"\/\" in order for\n it to work correctly.\n \n For more details:\n * https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTML\/Element\/base\n \n This is a placeholder for base href that will be replaced by the value of\n the `--base-href` argument provided to `flutter build`.\n -->\n <base href=\"$FLUTTER_BASE_HREF\">\n \n <meta charset=\"UTF-8\">\n <meta content=\"IE=Edge\" http-equiv=\"X-UA-Compatible\">\n <meta name=\"description\" content=\"chapter 04\">\n \n <!-- iOS meta tags & icons -->\n <meta name=\"apple-mobile-web-app-capable\" content=\"yes\">\n <meta name=\"apple-mobile-web-app-status-bar-style\" content=\"black\">\n <meta name=\"apple-mobile-web-app-title\" content=\"flutter_layout\">\n <link rel=\"apple-touch-icon\" href=\"icons\/Icon-192.png\">\n \n <!-- Favicon -->\n <link rel=\"icon\" type=\"image\/png\" href=\"favicon.png\"\/>\n \n <title>flutter_layout<\/title>\n <link rel=\"manifest\" href=\"manifest.json\">\n \n <script>\n \/\/ The value below is injected by flutter build, do not touch.\n const serviceWorkerVersion = null;\n <\/script>\n <!-- This script adds the flutter initialization JS code -->\n <script src=\"flutter.js\" defer><\/script>\n <\/head>\n <body>\n <script>\n window.addEventListener('load', function(ev) {\n \/\/ Download main.dart.js\n _flutter.loader.loadEntrypoint({\n serviceWorker: {\n serviceWorkerVersion: serviceWorkerVersion,\n },\n onEntrypointLoaded: function(engineInitializer) {\n engineInitializer.initializeEngine().then(function(appRunner) {\n appRunner.runApp();\n });\n }\n });\n });\n <\/script>\n <\/body>\n <\/html>\n\n```\n\n1. How do I replace the `serviceWorkerVersion` variable correctly using the `{{flutter_service_worker_version}}` template token?\n2. How do I update the `FlutterLoader.loadEntrypoint` method to use the new `FlutterLoader.load` method?\n\n\nI appreciate any guidance or examples on how to resolve these deprecation warnings. Thank you!","questionMetadata":{"type":"version","tag":"dart","level":"intermediate"},"answer":"## In `web\/index.html`\n\n\n#### 1. Replace the serviceWorkerVersion\n\n\n**From**\n\n\n\n```\nconst serviceWorkerVersion = null;\n\n```\n\n**To**\n\n\n\n```\nconst serviceWorkerVersion = {{flutter_service_worker_version}};\n\n```\n\n#### 2. 
To fix the *FlutterLoader.loadEntrypoint* warning, replace body\n\n\n**From**\n\n\n\n```\n<body>\n<script>\n window.addEventListener('load', function(ev) {\n \/\/ Download main.dart.js\n _flutter.loader.loadEntrypoint({\n serviceWorker: {\n serviceWorkerVersion: serviceWorkerVersion,\n },\n onEntrypointLoaded: function(engineInitializer) {\n engineInitializer.initializeEngine().then(function(appRunner) {\n appRunner.runApp();\n });\n }\n });\n });\n<\/script>\n<\/body>\n\n```\n\n**To**\n\n\n\n```\n<body>\n<script src=\"flutter_bootstrap.js\" async><\/script>\n<\/body>\n\n```\n\n## Learn more\n\n\n<https:\/\/docs.flutter.dev\/platform-integration\/web\/bootstrapping>"}
{"questionId":"b666e231a024a813c790ec6080b9b072ee421b13ac017f1be21cd97ac9998919","question":"ASP.NET Core 8.0 JWT Validation Issue: SecurityTokenNoExpirationException Despite Valid Token\nI'm working on an ASP.NET Core 8.0 Web API project with a layered architecture. I've encountered an issue during the JWT validation process in my authentication and registration endpoints.\n\n\nIn my `program.cs`, I have the following token validation parameters set up:\n\n\n\n```\nvar tokenValidationParameters = new TokenValidationParameters()\n {\n ValidateIssuer = true,\n ValidateAudience = true,\n ValidAudience = jwtSettings.ValidAudience,\n ValidIssuer = jwtSettings.ValidIssuer,\n IssuerSigningKey = new \n SymmetricSecurityKey(Encoding.ASCII.GetBytes(jwtSettings.Secret)),\n ClockSkew = jwtSettings.TokenLifetime\n };\n\nbuilder.Services.AddAuthentication(options =>\n{\n options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;\n options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;\n options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;\n})\n .AddJwtBearer(options =>\n {\n options.SaveToken = true;\n options.RequireHttpsMetadata = false;\n options.TokenValidationParameters = tokenValidationParameters;\n \n options.Events = new JwtBearerEvents\n {\n OnMessageReceived = context =>\n {\n context.Token = context.Request.Cookies[\"authorization\"];\n return Task.CompletedTask;\n }\n };\n });\n\nbuilder.Services.AddSingleton(tokenValidationParameters);\n\n```\n\nFor token generation, I use the following method:\n\n\n\n```\npublic async Task<Response<RefreshTokenDto>> GenerateAuthResultForCustomAsync(Customer \ncustomer)\n{\n try\n {\n var tokenHandler = new JwtSecurityTokenHandler();\n var key = Encoding.ASCII.GetBytes(_jwtSettings.Secret);\n\n var userRoles = await _userManager.GetRolesAsync(customer);\n\n var authClaims = new List<Claim>\n {\n new Claim(JwtRegisteredClaimNames.Sub, customer.Email),\n new Claim(JwtRegisteredClaimNames.Jti, \nGuid.NewGuid().ToString()),\n new Claim(\"customerId\", customer.Id),\n new Claim(\"firstName\", customer.FirstName),\n new Claim(\"lastName\", customer.LastName),\n new Claim(\"countryId\", customer.CountryId.ToString()),\n new Claim(\"phoneNumber\", customer.PhoneNumber),\n new Claim(\"userName\", customer.UserName)\n };\n\n authClaims.AddRange(userRoles.Select(role => new Claim(ClaimTypes.Role, role)));\n\n var tokenDescriptor = new SecurityTokenDescriptor()\n {\n Subject = new ClaimsIdentity(authClaims),\n Issuer = _jwtSettings.ValidIssuer,\n Audience = _jwtSettings.ValidAudience,\n Expires = _dateTimeProvider.Now.Add(_jwtSettings.TokenLifetime).UtcDateTime,\n SigningCredentials = new SigningCredentials(\n new SymmetricSecurityKey(key),\n SecurityAlgorithms.HmacSha256Signature)\n };\n\n var token = tokenHandler.CreateToken(tokenDescriptor);\n\n}\n\n```\n\nLastly, the method that validates the token is:\n\n\n\n```\npublic class PrincipalTokenService : IPrincipalTokenService\n{\n private readonly TokenValidationParameters _tokenValidationParameters;\n\n public PrincipalTokenService(TokenValidationParameters tokenValidationParameters)\n {\n _tokenValidationParameters = tokenValidationParameters;\n }\n\n public ClaimsPrincipal GetPrincipalFromToken(string token)\n {\n var tokenHandler = new JwtSecurityTokenHandler();\n\n try\n {\n var handler = tokenHandler.ValidateToken(\n token,\n _tokenValidationParameters,\n out var validatedToken);\n\n return !IsJwtWithValid(validatedToken) ? 
null : handler;\n }\n catch\n {\n return null;\n }\n }\n\n```\n\n}\n\n\nWhen executing `GetPrincipalFromToken`, I get the following error:\n\n\n\n> \n> Microsoft.IdentityModel.Tokens.SecurityTokenNoExpirationException: 'IDX10225: Lifetime validation failed. The token is missing an Expiration Time. Tokentype:\n> 'System.IdentityModel.Tokens.Jwt.JwtSecurityToken'\n> \n> \n> \n\n\nHowever, removing the expiration date from the token and disabling its validation in the configuration leads to similar issues with the issuer, and subsequently with the audience. Despite these validations, the generated token is valid as confirmed by jwt.io.\n\n\nThe entire token generation and validation logic was ported from a working project in ASP.NET Core 6.0, which has been functioning without issues for over 3 years. The server environment and configuration have remained unchanged, suggesting the code itself is not the issue.\n\n\nI suspect the problem might be related to package versions. Here are the relevant packages used in the 8.0 project:\n\n\n\n```\n<PackageReference Include=\"Asp.Versioning.Mvc\" Version=\"8.0.0\" \/>\n<PackageReference Include=\"Microsoft.AspNetCore.Authentication.JwtBearer\" Version=\"8.0.3\" \/>\n<PackageReference Include=\"Microsoft.AspNetCore.Mvc.NewtonsoftJson\" Version=\"8.0.3\" \/>\n<PackageReference Include=\"Microsoft.EntityFrameworkCore\" Version=\"8.0.3\" \/>\n<PackageReference Include=\"Microsoft.EntityFrameworkCore.Proxies\" Version=\"8.0.3\" \/>\n<PackageReference Include=\"Microsoft.EntityFrameworkCore.SqlServer\" Version=\"8.0.3\" \/>\n<PackageReference Include=\"Microsoft.EntityFrameworkCore.Tools\" Version=\"8.0.3\">\n<PackageReference Include=\"Microsoft.AspNet.WebApi.Client\" Version=\"6.0.0\" \/>\n<PackageReference Include=\"Microsoft.AspNetCore.Identity.EntityFrameworkCore\" Version=\"8.0.3\" \/>\n<PackageReference Include=\"Microsoft.IdentityModel.Tokens\" Version=\"7.4.1 \/>\n\n```\n\nHas anyone else encountered similar issues with JWT validation in ASP.NET Core 8.0? Any insights or suggestions would be greatly appreciated.\n\n\nP.S. Additionally, I'd like to clarify that the jwtSettings is a model used to populate the tokenValidationParameters with data from the application settings. The generated token, when verified on jwt.io, appears correct in all respects - the exp field contains the correct time, the iss field is present, and all the data within the token is accurate. I've even attempted to bypass the jwtSettings model entirely by hardcoding the parameters directly into tokenValidationParameters, but I encounter the same issue, which leads me to suspect that the problem might be within the package handling ValidateToken itself.","questionMetadata":{"type":"version","tag":"c#","level":"intermediate"},"answer":"Remove the package `Microsoft.IdentityModel.Tokens\" Version=\"7.4.1`, then rebuild the project."}
{"questionId":"2e755188e486b79a1f3fe079cb7801a060dd5a398f8c7b2da6d936bb67da7e4b","question":"Need to understand map.get() method after Overriding hashCode and equals in Java\nI have overridden the `hashCode` and `equals` methods as below, and I want to understand the implementation of Map get method.\n\n\n\n```\npublic class Student{\n String name;\n\n Student(String s){\n this.name = s;\n }\n\n @Override\n public boolean equals(Object o) {\n return false;\n }\n\n @Override\n public int hashCode() {\n return 111;\n }\n\n public static void main(String[] args) {\n Map<Student,String> map=new HashMap<>();\n Student ob1=new Student(\"A\");\n Student ob2=new Student(\"B\");\n map.put(ob1,\"A\");\n map.put(ob2,\"B\");\n System.out.println(map.get(ob1));\n }\n}\n\n```\n\nI tried running `map.get()` expecting `null` result because the key will never be found because the `equals()` method will always return false but I am getting the result as `A` in this case.","questionMetadata":{"type":"conceptual","tag":"java","level":"intermediate"},"answer":"HashMap`'s `get` [checks for equality with `==` before using `equals`](https:\/\/github.com\/openjdk\/jdk\/blob\/6f7f0f1de05fdc0f6a88ccd90b806e8a5c5074ef\/src\/java.base\/share\/classes\/java\/util\/HashMap.java#L579).\n\n\nSo the fact that you're using the same object you used as a key (rather than an object with the same content but a different reference) makes `get` work.\n\n\nIf you try this way\n\n\n\n```\npublic static void main(String[] args) {\n Map<Student,String> map=new HashMap<>();\n Student ob1=new Student(\"A\");\n Student ob2=new Student(\"B\");\n Student keyTest = new Student(\"A\");\n map.put(ob1,\"A\");\n map.put(ob2,\"B\");\n System.out.println(map.get(keyTest)); \/\/different key here\n}\n\n```\n\nit prints `null`."}
{"questionId":"7fea954144ae85d95d431313db5fe04da549fc1caccb96f4e34e54394a03c9e4","question":"typescript error using @material-tailwind\/react with nextjs14\ntrying to use \"@material-tailwind\/react\": \"^2.1.9\" in \"next\": \"14.1.4\"\n\n\n\n```\n\"use client\";\nimport { Button } from \"@material-tailwind\/react\";\n\nexport default function Home() {\n return <Button>Test MUI<\/Button>;\n}\n\n```\n\nbut the button is showing a red **squiggly line** with error\n\n\n\n```\nType '{ children: string; }' is missing the following properties from type 'Pick<ButtonProps, \"children\" | \"color\" | \"disabled\" | \"translate\" | \"form\" | \"slot\" | \"style\" | \"title\" | \"onChange\" | \"onClick\" | \"className\" | \"value\" | \"key\" | \"autoFocus\" | ... 259 more ... | \"loading\">': placeholder, onPointerEnterCapture,","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"I have the same issue working on React with Vite + Typescript, i'm using Node 20.12.1.\n\n\nI found the solution here: <https:\/\/github.com\/creativetimofficial\/material-tailwind\/issues\/528> the problem is in the new versions of @react\/types package.\n\n\nIf you downgrade should be fixed. do this:\n\n\n1. Delete your node\\_modules folder and package-lock.json\n2. Manually, on your package.json, replace the version of @types\/react to \"18.2.42\" (Be aware to not include the ^ symbol, if you keep it will not work).\n3. reinstall the dependencies with npm install.\n\n\nI also read some cases where 18.2.42 won't work, if happens to you try the 18.2.19 version."}
{"questionId":"8625902d3b427dbbf8b3f10c7d529e3de8e6ba281b8fcd9a372afba7eeca3d89","question":"Swift 6 Error with Non-Isolated Global Shared Mutable State in EnvironmentKey\nI've encountered a concurrency safety issue in Swift 5.10 StrictConcurrency mode that I'm struggling to resolve. I'm working with an `EnvironmentKey` structure that includes a static property defined as an asynchronous closure returning an optional custom actor. Here is the simplified code snippet:\n\n\n\n```\nstruct DataHandlerKey: EnvironmentKey {\n static var defaultValue: @Sendable () async -> Hello? = { nil }\n}\n\nactor Hello {}\n\n```\n\nThe `defaultValue` is a closure marked with `@Sendable` that asynchronously returns an optional `Hello` actor instance. However, Swift 5.10 with StrictConcurrency compiler raises a concurrency safety error, stating:\n\n\n\n```\nStatic property 'defaultValue' is not concurrency-safe because it is non-isolated global shared mutable state; this is an error in Swift 6.\n\n```\n\nI understand that the issue is related to the static property potentially introducing non-isolated global shared mutable state, but I'm unsure how to adjust my code to adhere to Swift 6's enhanced concurrency safety requirements. The `Hello` actor is designed to be concurrency-safe, yet I'm unable to use it as intended in this context.\n\n\nAny insights, suggestions, or references to relevant documentation would be greatly appreciated. Thank you in advance for your help!","questionMetadata":{"type":"version","tag":"swift","level":"intermediate"},"answer":"defaultValue` should be a `let` (or a computed property). It doesn't need to be a `var`:\n\n\n\n```\nstatic let defaultValue: @Sendable () async -> Hello? = { nil }\n\n```\n\nIf it is a `var`, any code from any thread can assign to it and change its value, and since it is not isolated to an actor, this can cause data races.\n\n\nWhat allows you to change environment values is not that `defaultValue` is a `var`, but that the `EnvironmentValues` property has a setter.\n\n\n\n```\nextension EnvironmentValues {\n var dataHandler: @Sendable () async -> Hello? {\n get { self[DataHandlerKey.self] }\n \/\/ this setter is what allows you to chang the environment\n set { self[DataHandlerKey.self] = newValue }\n }\n}"}
{"questionId":"2b8c64f07c4039aaccba5baf77fced8fe599cc6f750e53bfeed7d4e0e27a6c54","question":"What exactly is slowing np.sum down?\nIt is known that np.sum(arr) is quite a lot slower than arr.sum(). For example:\n\n\n\n```\nimport numpy as np\nnp.random.seed(7)\nA = np.random.random(1000)\n%timeit np.sum(A)\n2.94 \u00b5s \u00b1 13.8 ns per loop (mean \u00b1 std. dev. of 7 runs, 100,000 loops each)\n%timeit A.sum()\n1.8 \u00b5s \u00b1 40.8 ns per loop (mean \u00b1 std. dev. of 7 runs, 1,000,000 loops each)\n\n```\n\nCan anyone give a detailed code-based explanation of what np.sum(arr) is doing that arr.sum() is not?\n\n\nThe difference is insignificant for much longer arrays. But it is relatively significant for arrays of length 1000 or less, for example.\n\n\nIn my code I do millions of array sums so the difference is particularly significant.","questionMetadata":{"type":"optimization","tag":"python","level":"intermediate"},"answer":"When I run np.sum(a) in debug mode on my PC, it steps into the following code.\n\n\n<https:\/\/github.com\/numpy\/numpy\/blob\/v1.26.5\/numpy\/core\/fromnumeric.py#L2178>\n\n\nThe following is the part of the code where it is relevant.\n\n\n\n```\nimport numpy as np\nimport types\n\n\ndef _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs):\n passkwargs = {k: v for k, v in kwargs.items()\n if v is not np._NoValue}\n\n if type(obj) is not np.ndarray:\n raise NotImplementedError\n\n return ufunc.reduce(obj, axis, dtype, out, **passkwargs)\n\n\ndef copied_np_sum(a, axis=None, dtype=None, out=None, keepdims=np._NoValue, initial=np._NoValue, where=np._NoValue):\n if isinstance(a, types.GeneratorType):\n raise NotImplementedError\n\n return _wrapreduction(\n a, np.add, 'sum', axis, dtype, out, keepdims=keepdims,\n initial=initial, where=where\n )\n\n```\n\nNote that this ends up calling `np.add.reduce(a)`.\n\n\nBenchmark:\n\n\n\n```\nimport timeit\n\n\ndef benchmark(setup, stmt, repeat, number):\n print(f\"{stmt:16}: {min(timeit.repeat(setup=setup, stmt=stmt, globals=globals(), repeat=repeat, number=number)) \/ number}\")\n\n\nn_item = 10 ** 3\nn_loop = 1000\nn_set = 1000\n\ndata_setup = f\"\"\"\\\nimport numpy as np\nrng = np.random.default_rng(0)\na = rng.random({n_item})\n\"\"\"\n\nbenchmark(setup=data_setup, stmt=\"np.sum(a)\", repeat=n_set, number=n_loop)\nbenchmark(setup=data_setup, stmt=\"a.sum()\", repeat=n_set, number=n_loop)\nbenchmark(setup=data_setup, stmt=\"copied_np_sum(a)\", repeat=n_set, number=n_loop)\nbenchmark(setup=data_setup, stmt=\"np.add.reduce(a)\", repeat=n_set, number=n_loop)\n\n```\n\n\n```\nnp.sum(a) : 2.6407251134514808e-06\na.sum() : 1.3474803417921066e-06\ncopied_np_sum(a): 2.50667380169034e-06\nnp.add.reduce(a): 1.195137854665518e-06\n\n```\n\nAs you can see, `copied_np_sum` performs similarly to `np.sum`, and `np.add.reduce` is similar to `a.sum`.\nSo the majority of the difference between `np.sum` and `a.sum` is likely due to what `copied_np_sum` does before calling `np.add.reduce`.\nIn other words, it's the overhead caused by the dict comprehension and the additional function calls.\n\n\nHowever, although there is a significant difference in the above benchmark that reproduces the OP's one, as pointed out in the [comment](https:\/\/stackoverflow.com\/questions\/78626515\/what-exactly-is-slowing-np-sum-down\/78626678?noredirect=1#comment138620975_78626678), this may be overstated.\nBecause timeit repeatedly executes the code and uses the (best of) average, with a small array like in this benchmark, the array may already be 
in the CPU cache when it is measured.\nThis is not necessarily an unfair condition. The same thing could happen in actual use. Rather, it should be so whenever possible.\nThat being said, for a canonical answer, we should measure it.\n\n\nBased on @user3666197 advice, we can create a large array immediately after creating `a` to evicts `a` from the cache.\nNote that I decided to use `np.arange` here, which I confirmed has the same effect but runs faster.\n\n\n\n```\nimport timeit\n\n\ndef benchmark(setup, stmt, repeat, number):\n print(f\"{stmt:16}: {min(timeit.repeat(setup=setup, stmt=stmt, globals=globals(), repeat=repeat, number=number)) \/ number}\")\n\n\nn_item = 10 ** 3\nn_loop = 1\nn_set = 100\n\ndata_setup = f\"\"\"\\\nimport numpy as np\nrng = np.random.default_rng(0)\na = rng.random({n_item})\n_ = np.arange(10 ** 9, dtype=np.uint8) # To evict `a` from the CPU cache.\n\"\"\"\n\nbenchmark(setup=data_setup, stmt=\"np.sum(a)\", repeat=n_set, number=n_loop)\nbenchmark(setup=data_setup, stmt=\"a.sum()\", repeat=n_set, number=n_loop)\nbenchmark(setup=data_setup, stmt=\"copied_np_sum(a)\", repeat=n_set, number=n_loop)\nbenchmark(setup=data_setup, stmt=\"np.add.reduce(a)\", repeat=n_set, number=n_loop)\n\n```\n\nWithout eviction (With cache):\n\n\n\n```\nnp.sum(a) : 2.6407251134514808e-06\na.sum() : 1.3474803417921066e-06\ncopied_np_sum(a): 2.50667380169034e-06\nnp.add.reduce(a): 1.195137854665518e-06\n\n```\n\nWith eviction (Without cache):\n\n\n\n```\nnp.sum(a) : 4.916824400424957e-05\na.sum() : 3.245798870921135e-05\ncopied_np_sum(a): 4.7205016016960144e-05\nnp.add.reduce(a): 3.0195806175470352e-05\n\n```\n\nNaturally, the presence or absence of cache makes a huge impact on performance.\nHowever, although the difference has become smaller, it can still be said to be a significant difference.\nAlso, since these four relationships remain the same as before, the conclusion also remains the same.\n\n\nThere are a few things I should add.\n\n\n### Note1\n\n\nThe claim regarding method loading is incorrect.\n\n\n\n```\nbenchmark(setup=f\"{data_setup}f = np.sum\", stmt=\"f(a)\", repeat=n_set, number=n_loop)\nbenchmark(setup=f\"{data_setup}f = a.sum\", stmt=\"f()\", repeat=n_set, number=n_loop)\n\n```\n\n\n```\nnp.sum(a) : 4.916824400424957e-05\na.sum() : 3.245798870921135e-05\nf(a) : 4.6479981392621994e-05 <-- Same as np.sum.\nf() : 3.27317975461483e-05 <-- Same as a.sum.\nnp.add.reduce(a): 3.0195806175470352e-05 <-- Also, note that this one is fast.\n\n```\n\n### Note2\n\n\nAs all benchmarks show, `np.add.reduce` is the fastest (least overhead). 
If your actual application also deals only with 1D arrays, and such a small difference is important to you, you should consider using `np.add.reduce`.\n\n\n### Note3\n\n\nActually, numba may be the fastest in this case.\n\n\n\n```\nfrom numba import njit\nimport numpy as np\nimport math\n\n\n@njit(cache=True)\ndef nb_numpy_sum(a):\n # This will be a reduce sum.\n return np.sum(a)\n\n\n@njit(cache=True)\ndef nb_pairwise_sum(a):\n # https:\/\/en.wikipedia.org\/wiki\/Pairwise_summation\n N = 2\n if len(a) <= N:\n return np.sum(a) # reduce sum\n else:\n m = len(a) \/\/ 2\n return nb_pairwise_sum(a[:m]) + nb_pairwise_sum(a[m:])\n\n\n@njit(cache=True)\ndef nb_kahan_sum(a):\n # https:\/\/en.wikipedia.org\/wiki\/Kahan_summation_algorithm\n total = a.dtype.type(0.0)\n c = total\n for i in range(len(a)):\n y = a[i] - c\n t = total + y\n c = (t - total) - y\n total = t\n return total\n\n\ndef test():\n candidates = [\n (\"np.sum\", np.sum),\n (\"math.fsum\", math.fsum),\n (\"nb_numpy_sum\", nb_numpy_sum),\n (\"nb_pairwise_sum\", nb_pairwise_sum),\n (\"nb_kahan_sum\", nb_kahan_sum),\n ]\n\n n = 10 ** 7 + 1\n a = np.full(n, 0.1, dtype=np.float64)\n for name, f in candidates:\n print(f\"{name:16}: {f(a)}\")\n\n\ntest()\n\n```\n\nAccuracy:\n\n\n\n```\nnp.sum : 1000000.0999999782\nmath.fsum : 1000000.1000000001\nnb_numpy_sum : 1000000.0998389754\nnb_pairwise_sum : 1000000.1\nnb_kahan_sum : 1000000.1000000001\n\n```\n\nTiming:\n\n\n\n```\nnp.sum(a) : 4.7777313739061356e-05\na.sum() : 3.219071435928345e-05\nnp.add.reduce(a) : 2.9000919312238693e-05\nnb_numpy_sum(a) : 1.0361894965171814e-05\nnb_pairwise_sum(a): 1.4733988791704178e-05\nnb_kahan_sum(a) : 1.2937933206558228e-05\n\n```\n\nNote that although `nb_pairwise_sum` and `nb_kahan_sum` have mathematical accuracy comparable to NumPy, neither is intended to be an exact replica of NumPy's implementation.\nSo there is no guarantee that the results will be exactly the same as NumPy's.\n\n\nIt should also be clarified that this difference is due to the amount of overhead, and **NumPy is significantly faster for large arrays** (e.g. >10000).\n\n\n\n\n---\n\n\nThe following section was added after this answer was accepted. Below is an improved version of @J\u00e9r\u00f4meRichard's pairwise sum that sacrifices some accuracy for faster performance on larger arrays. 
See the comments for more details.\n\n\n\n```\nimport numba as nb\nimport numpy as np\n\n# Very fast function which should be inlined by LLVM.\n# The loop should be completely unrolled and designed so the SLP-vectorizer \n# could emit SIMD instructions, though in practice it does not...\[email protected](cache=True)\ndef nb_sum_x16(a):\n v1 = a[0]\n v2 = a[1]\n for i in range(2, 16, 2):\n v1 += a[i]\n v2 += a[i+1]\n return v1 + v2\n\[email protected](cache=True)\ndef nb_pairwise_sum(a):\n n = len(a)\n m = n \/\/ 2\n\n # Trivial case for tiny arrays\n if n < 16:\n return sum(a[:m]) + sum(a[m:])\n\n # Computation of a chunk (of 16~256 items) using an iterative \n # implementation so to reduce the overhead of function calls.\n if n <= 256:\n v = nb_sum_x16(a[0:16])\n i = 16\n # Main loop iterating on blocks (of exactly 16 items)\n while i + 15 < n:\n v += nb_sum_x16(a[i:i+16])\n i += 16\n return v + sum(a[i:])\n\n # OPTIONAL OPTIMIZATION: only for array with 1_000~100_000 items\n # Same logic than above but with bigger chunks\n # It is meant to reduce branch prediction issues with small \n # chunks by splitting them in equal size.\n if n <= 4096:\n v = nb_pairwise_sum(a[:256])\n i = 256\n while i + 255 < n:\n v += nb_pairwise_sum(a[i:i+256])\n i += 256\n return v + nb_pairwise_sum(a[i:])\n\n return nb_pairwise_sum(a[:m]) + nb_pairwise_sum(a[m:])"}
{"questionId":"540117a52b595f3d8cb75d579fd437089e4992a6974c53e749a6c48ebb0b0802","question":"node modules error: Type parameter 'OT' has a circular constraint\nGetting error while run: `ng serve`\n\n\nerror:\n\n\n\n```\nError: node_modules\/@ngrx\/effects\/src\/effect_creator.d.ts:12:43 - error TS2313: Type parameter 'OT' has a circular constraint.\n\n12 }, DT extends DispatchType<C>, OT extends ObservableType<DT, OT>, R extends EffectResult<OT>>(source: () => R & ConditionallyDisallowActionCreator<DT, R>, config?: C): R & CreateEffectMetadata;\n\n```\n\nhow to fix this error?\n\n\ni tried with\n\n\n`npm install @ngrx\/effects@latest @ngrx\/store@latest @ngrx\/store-devtools@latest rxjs@latest --save` \n\n\nbut still got error while run `ng serve` in angular project! my angular version is: `^17.1.3` and node vesrion `18.19.0","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"UPDATE: This issue is permanently solved by the new official update of NGRX v17.2.0. Here is the original issue about this: <https:\/\/github.com\/ngrx\/platform\/issues\/4275>\n\n\nI got exactly the same error, while I worked on the angular update on our project. Finally, it seems like the problem is with the latest Typescript v5.4.2. I just downgraded it to v5.2.2 and it works properly.\n\n\nI will do a more detailed investigation later, I just would like to let you know to solve your blocker issue right now."}
{"questionId":"b0a22764c7b389610578feea74df05af041f088df374390b02aa4f38956afe35","question":"\/usr\/bin\/env: \u2018node\u2019: Text file busy` Error after system update\nI'm working on Next.js Project from past 4 - 5 months and it was working fine till yesterday. But suddenly today I'm facing some issue.\n\n\nWhen I run `yarn dev` which runs command `npx nodemon index.ts` it gives the following error:\n\n\n\n```\nyarn run v1.22.22\n$ next dev\n\/usr\/bin\/env: \u2018node\u2019: Text file busy\nerror Command failed with exit code 126.\ninfo Visit https:\/\/yarnpkg.com\/en\/docs\/cli\/run for documentation about this command.\n\n```\n\nI did system update yesterday, maybe it's due to some kernel issues or the new node version have some bug?\n\n\nI'm on Arch Linux with node version `v22.2.0` and kernel version `6.9.2-arch1-1`. Thanks for reading it to here, any help will be appreciated.","questionMetadata":{"type":"version","tag":"javascript","level":"intermediate"},"answer":"This seems to be related to the latest kernel version, temporary workaround is to set `UV_USE_IO_URING=0` in your environment variables\n\n\nso either\n\n\n\n```\nUV_USE_IO_URING=0 yarn dev\n\n```\n\nor\n\n\n\n```\nexport UV_USE_IO_URING=0\nyarn dev\n\n```\n\nreference: <https:\/\/github.com\/nodejs\/node\/issues\/48444>"}
{"questionId":"e9f9ae1568d0258f01551714b77b22fcdb76c02fb680d4b8451c8d7a67d2fd23","question":"Is it standard C17 to wrap a parameter in a function declaration in parenthesis\nIs the following a standard C function declaration according to ISO\/IEC 9899:2017 (c17)?\n\n\n\n```\nint foo(int (bar), int (baz));\n\n```\n\nIf so, please point me to the section in the standard that defines this.\n\n\nIn N2310 Appendix Phrase structure grammar, A.2.2 Declarations, Section 6.7.6, I see the following:\n\n\n\n```\nparameter-list:\n parameter-declaration\n parameter-list , parameter-declaration\n\n```\n\nI'm not familiar with this type of grammar expression, so I'm not sure how to interpret it.\n\n\nThe following program compiles without errors with `gcc --std=c17 -Wall` and `clang --std=c17 -Wall`\n\n\n\n```\nstatic int foo(int (bar), int (baz));\nstatic int foo(int bar, int baz)\n{\n return bar + baz;\n}\nint main() {\n return foo(1, 2);\n}\n\n```\n\nHowever if I run `cppcheck` (a static analysis tool) on this program, it appears to parse incorrectly.\n\n\nI'm most interested if this grammar is standard C, or a compiler-specific behavior so I can try to fix the parser or submit a bug report if I can't.","questionMetadata":{"type":"version","tag":"c","level":"intermediate"},"answer":"The declaration is allowed by the standard.\n\n\nSectin 6.7.6p1 of the C standard gives the full syntax for a declaration, including the portion you quoted. The relevant parts are as follows:\n\n\nA `parameter-declaration` is defined as:\n\n\n\n> \n> parameter-declaration:\n> \n> \n> - declaration-specifiers declarator\n> - declaration-specifiers abstract-declaratoropt\n> \n> \n> \n\n\nA `declarator` is defined as:\n\n\n\n> \n> declarator:\n> \n> \n> - pointeropt direct-declarator\n> \n> \n> \n\n\nAnd a `direct-declarator` is defined (in part) as:\n\n\n\n> \n> direct-declarator:\n> \n> \n> - `(` declarator `)`\n> \n> \n> \n\n\nSo we can see from the above that a parameter name can be enclosed in parenthesis."}
{"questionId":"3dd0f77d3ff0eeaf3929b304646dea8d6d506d99db94e0ce7d9139e76d334438","question":"Pytest- How to remove created data after each test function\nI have a FastAPI + SQLAlchemy project and I'm using Pytest for writing unit tests for the APIs.\n\n\nIn each test function, I create some data in some tables (user table, post table, comment table, etc) using SQLAlchemy. These created data in each test function will remain in the tables after test function finished and will affect on other test functions.\n\n\nFor example, in the first test function I create 3 posts, and 2 users, then in the second test functions, these 3 posts and 2 users remained on the tables and makes my test expectations wrong.\n\n\nFollowing is my fixture for pytest:\n\n\n\n```\[email protected]\ndef session(engine):\n Session = sessionmaker(bind=engine)\n session = Session()\n yield session\n session.rollback() # Removes data created in each test method\n session.close() # Close the session after each test\n\n```\n\nI used `session.rollback()` to remove all created data during session, but it doesn't remove data.\n\n\nAnd the following is my test functions:\n\n\n\n```\nclass TestAllPosts(PostBaseTestCase):\n\n def create_logged_in_user(self, db):\n user = self.create_user(db)\n return user.generate_tokens()[\"access\"]\n\n def test_can_api_return_all_posts_without_query_parameters(self, client, session):\n posts_count = 5\n user_token = self.create_logged_in_user(session)\n for i in range(posts_count):\n self.create_post(session)\n\n response = client.get(url, headers={\"Authorization\": f\"Bearer {user_token}\"})\n assert response.status_code == 200\n json_response = response.json()\n assert len(json_response) == posts_count\n\n def test_can_api_detect_there_is_no_post(self, client, session):\n user_token = self.create_logged_in_user(session)\n response = client.get(url, headers={\"Authorization\": f\"Bearer {user_token}\"})\n assert response.status_code == 404\n\n```\n\nIn the latest test function, instead of getting 404, I get 200 with 5 posts (from the last test function)\n\n\nHow can I remove the created data in each test function after test function finished?","questionMetadata":{"type":"debugging","tag":"python","level":"intermediate"},"answer":"The problem is that there are **multiple sessions**.\n\n\nOne is used by your tests. The other one(s) is\/are used by the server.\n\n\nBecause you are using `client.get`, you are sending a request to the server, which will use its own database session.\n\n\n1. To solve your problem you can just truncate all tables at the end of each test: <https:\/\/stackoverflow.com\/a\/25220958\/5521670>\n\n\n\n```\[email protected]\ndef session(engine):\n Session = sessionmaker(bind=engine)\n session = Session()\n yield session\n\n # Remove any data from database (even data not created by this session)\n with contextlib.closing(engine.connect()) as connection:\n transaction = connection.begin()\n connection.execute(f'TRUNCATE TABLE {\",\".join(table.name for table in reversed(Base.metadata.sorted_tables)} RESTART IDENTITY CASCADE;'))\n transaction.commit()\n\n session.rollback() # Removes data created in each test method\n session.close() # Close the session after each test\n\n```\n\n2. 
Another alternative would be to make the server use your test session (just like the FastAPI documentation suggests): <https:\/\/fastapi.tiangolo.com\/advanced\/testing-database\/>\n\n\n\n```\ndef override_get_db():\n try:\n db = TestingSessionLocal()\n yield db\n finally:\n db.close()\n\n\napp.dependency_overrides[get_db] = override_get_db"}
{"questionId":"80922dcc09c3265f1aaaf027d10fa87227faa7b3861983bf0508f8816079cc2a","question":"Warning in property initialization in class with primary constructor\nI noticed that a snippet like the below one, it marks the property initialization with a warning.\n\n\n\n```\npublic sealed class C(int a)\n{\n public int A { get; } = a; \/\/<--here\n\n public int Sum(int b)\n {\n return a + b;\n }\n}\n\n```\n\nThe warning says:\n\n\n\n> \n> warning CS9124: Parameter 'int a' is captured into the state of the\n> enclosing type and its value is also used to initialize a field,\n> property, or event.\n> \n> \n> \n\n\nHowever, if I omit any further `a` variable usage, the warning disappears.\n\n\n\n```\npublic sealed class C(int a)\n{\n public int A { get; } = a;\n\n public int Sum(int b)\n {\n return b; \/\/<-- no more 'a' used here\n }\n}\n\n```\n\nNow, it is not very clear to me the reason of the warning, although I have a suspect. Is it because any `a` modification in the class will not change the `A` property, in this case?","questionMetadata":{"type":"conceptual","tag":"c#","level":"intermediate"},"answer":"This happens because compiler will generate one backing field for `a` used in `Sum` and another backing field for auto-property `A`.\n\n\nNote that `a` is mutable, while `A` is not hence you can do:\n\n\n\n```\npublic void MutateA(int i) => a += i;\n\n```\n\nWhich will affect `Sum` but will not affect `A`:\n\n\n\n```\nC c = new C(42);\nc.MutateA(7);\nConsole.WriteLine(c.A); \/\/ Prints 42\nConsole.WriteLine(c.Sum(0)); \/\/ Prints 49\n\npublic sealed class C(int a)\n{\n public int A { get; } = a; \/\/<--here\n\n public int Sum(int b)\n {\n return a + b;\n }\n\n public void MutateA(int i) => a += i;\n}\n\n```\n\nThe workaround\/fix would be to use `A` instead of `a` in the `Sum`:\n\n\n\n```\npublic int Sum(int b) => A + b;\n\n```\n\nSee also:\n\n\n- [Resolve errors and warnings in constructor declarations: Primary constructor declaration](https:\/\/learn.microsoft.com\/en-us\/dotnet\/csharp\/language-reference\/compiler-messages\/constructor-errors#primary-constructor-declaration)"}
{"questionId":"bfdafb4a4efaa430a5669a064f359f42c991a4f05df8320fd48be8c7e4012633","question":"The most efficient way to test if a positive integer is 2^n (i.e. 1, 2, 4, 8, etc.) in C++20?\nA handy method to verify if a positive integer `n` is a power of two (like 1, 2, 4, 8, etc.) is to use the following test for having no more than 1 bit set:\n\n\n\n```\nbool test = n & (n - 1) == 0;\n\n```\n\nThis operation can be very efficient because it only involves subtraction, a bitwise AND and a conditional branch on the Zero Flag (ZF). If this expression is evaluated to `true`, then the number `n` is indeed a power of two.\n\n\nAnother method uses the `std::popcount` (population count) function, which is part of the C++20 standard library, for the test:\n\n\n\n```\nbool test = std::popcount(n) == 1; \/\/ (Since C++20)\n\n```\n\nThis function counts the number of set bits (1s) in. If the hardware supports a popcount instruction (POPCNT), this function can be very fast.\n\n\n*In C++, you generally \u201cpay for what you use\u201d. For this test there is no use for counting.*\n\n\nWhat is the better method, in terms of CPU efficiency?","questionMetadata":{"type":"optimization","tag":"c++","level":"intermediate"},"answer":"You know your number is positive (which excludes zero) so you can indeed just use `n & (n-1) == 0` without checking `n != 0`. That's your most efficient option, potentially more efficient than C++20 `std::has_single_bit`\n\n\n`std::has_single_bit` needs to rule out the no-bits-set case. For that, it can be slightly more efficient on modern x86 to do `popcount(n) == 1` if the compiler can assume support for a hardware `popcnt` instruction, which is why `std::has_single_bit` is often defined that way in C++ standard libraries.\n\n\n**But since you know your number is non-zero, the bithack is most efficient.** Especially if compiling for a target where the compiler can't assume a hardware popcount (like x86 without `-march=x86-64-v2` or newer), or AArch64 before ARMv9.x where scalar popcount requires copying to vector regs and back. RISC-V only has hardware popcount in an uncommon extension, not baseline.\n\n\nOn x86, it can be as cheap as\n\n\n\n```\n lea eax, [rdi-1]\n test eax, edi\n # boolean condition in ZF\n jz or setz or cmovz or whatever\n\n```\n\nAnd similar on AArch64 with `sub` and `tst`. And pretty much any other modern RISC can subtract 1 while putting the result into a separate register, then AND those together cheaply.\n\n\n\n\n---\n\n\nAnd if you're compiling for `-march=x86-64-v3` or later (BMI2 + AVX2 + FMA = Haswell feature set), the compiler can use [BMI1 `blsr eax, edi`](https:\/\/www.felixcloutier.com\/x86\/blsr) to clear (\"reset\") the lowest set bit and set FLAGS accordingly, with ZF set according to the output being zero. CF is set if the *input* was zero so some [branch conditions](https:\/\/www.felixcloutier.com\/x86\/jcc) can check that `n!=0`. But unfortunately conditions like `jbe` are `CF=1 or ZF=1`, `ja` is `CF=0 and ZF=0`. There isn't a single FLAGS condition that checks for ZF=1 and CF=0 which would let `std::has_single_bit` compile to just `blsr` plus a single branch, cmov, or setcc. `ja` is taken if the input had multiple bits set, not-taken for power-of-2 or zero.\n\n\nUnlike `test`, `blsr` can't macro-fuse with a later `jcc` so it doesn't save any uops vs. `lea`\/`test` if you're branching on it. It is better on Intel if the compiler is using it branchlessly, like for `setnz` or `cmovnz`. 
`blsr` is a single uop on Intel, and on AMD Zen 4 and later. 2 uops on Zen 3 and earlier (<https:\/\/uops.info\/>)\n\n\n\n\n---\n\n\n### Non-popcount way to exclude zero\n\n\nFor use-cases where you can't assume a non-zero input and don't have cheap hardware popcount, there's an alternate bithack that's 3 operations instead of two: `(n - 1) < (n ^ (n - 1))`\n\n\nThe right hand side is what [x86 BMI1 `blsmsk`](https:\/\/www.felixcloutier.com\/x86\/blsmsk) computes, but if we have `blsmsk` we have `popcnt`. \n\n`n-1` is a common subexpression so we only need to compute it once. For example RISC-V; AArch64 could be similar with cmp\/bltu\n\n\n\n```\n addi x1, x0, -1 # n-1\n xor x2, x1, x0 # n ^ (n-1)\n bltu x1, x2, power_of_two # branch on a comparison\n\n```\n\nFor zero, `n-1` is UINT\\_MAX, and `n ^ anything` is a no-op, so both sides are equal.\n\n\nFor a power of two, `n-1` sets all the bits below where the set bit was, and XOR sets that bit again. So it's larger than `n`, and also larger than `n-1`.\n\n\nFor a non-power-of-two, `n ^ (n-1)` is still just a mask up to and including the lowest set bit, with the high bits cancelled (the ones that `n-1` didn't flip). So it's smaller than `n-1`.\n\n\n[https:\/\/graphics.stanford.edu\/~seander\/bithacks.html#DetermineIfPowerOf2](https:\/\/graphics.stanford.edu\/%7Eseander\/bithacks.html#DetermineIfPowerOf2) also suggests `v && !(v & (v - 1));` but I don't think that's better since logical `&&` has to check both sides for non-zero."}
{"questionId":"f6de4efcc5969f182cdba51ba7bdbe7b8ef3d9b5667ff3fea2ea3ab1f3d37779","question":"CEF4Delphi application can't run two instances\nI have a Delphi application with an embedded CEF browser and it has stopped working since I updated if from CEF 117.1.4 and Chromium 117.0.5938.92 to CEF 123.0.12 and Chromium 123.0.6312.107.\n\n\nWith CEF 117 I can run two instances of the application with no issue, but now it fails on the second instance startup:\n\n\n\n```\nbegin\n GlobalCEFApp := TCefApplication.Create;\n\n InicializaCef;\n\n \/\/ Reducir el n\u00famero de locales a un m\u00ednimo\n GlobalCEFApp.LocalesRequired := 'ca,de,en-GB,en-US,es-419,es,fr,it,pt-BR,pt-PT';\n GlobalCEFApp.SetCurrentDir := True;\n GlobalCEFApp.LocalesDirPath := 'locales';\n\n Application.Initialize;\n Application.Title := 'QBrowser';\n Application.CreateForm(TMainForm, MainForm);\n test := GlobalCEFApp.StartMainProcess;\n if test then\n Application.Run;\n\n GlobalCEFApp.Free;\n GlobalCEFApp := nil;\nend.\n\n```\n\nGlobalCEFApp.StartMainProcess is now returning False.\n\n\nIs there some new configuration value I'm overlooking?","questionMetadata":{"type":"version","tag":"delphi","level":"intermediate"},"answer":"CEF changed the way it initializes and now it checks if another app is running with the same `RootCache` setting. This feature was added in CEF 120.1.8.\n\n\nIf `GlobalCEFApp.Cache` and `GlobalCEFApp.RootCache` are empty then the default platform specific directory will be used. In the case of Windows: `%AppData%\\Local\\CEF\\User Data\\`.\n\n\nUse of the default directory is not recommended in production applications. Multiple application instances writing to the same `GlobalCEFApp.RootCache` directory could result in data corruption.\n\n\nThere are two ways to avoid this:\n\n\n1. Implement `GlobalCEFApp.OnAlreadyRunningAppRelaunch` to be notified when a new app instance is starting and open a new tab or child form with a web browser.\n2. Use a different `GlobalCEFApp.RootCache` directory for each app instance.\n\n\nRead [the documentation](https:\/\/github.com\/salvadordf\/CEF4Delphi\/tree\/master\/docs) for all the details (search for `TCefApplicationCore` as type) about:\n\n\n- `GlobalCEFApp.OnAlreadyRunningAppRelaunch`:\n\n\n\n> \n> Implement this function to provide app-specific behavior when an already running app is relaunched with the same `TCefSettings.root_cache_path` value. For example, activate an existing app window or create a new app window. `command_line` will be read-only. Do not keep a reference to `command_line` outside of this function. Return `true` (`1`) if the relaunch is handled or `false` (`0`) for default relaunch behavior. Default behavior will create a new default styled Chrome window.\n> \n> \n> To avoid cache corruption only a single app instance is allowed to run for a given `TCefSettings.root_cache_path` value. On relaunch the app checks a process singleton lock and then forwards the new launch arguments to the already running app process before exiting early. Client apps should therefore check the `cef_initialize()` return value for early exit before proceeding.\n> \n> \n> This function will be called on the browser process UI thread.\n> \n> \n>\n- `GlobalCEFApp.RootCache`:\n\n\n\n> \n> The root directory for installation-specific data and the parent directory for profile-specific data. All `TCefSettings.cache_path` and `ICefRequestContextSettings.cache_path` values must have this parent directory in common. 
If this value is empty and `TCefSettings.cache_path` is non-empty then it will default to the `TCefSettings.cache_path` value. Any non-empty value must be an absolute path. If both values are empty then the default platform-specific directory will be used (`~\/.config\/cef_user_data` directory on Linux, `~\/Library\/Application Support\/CEF\/User Data` directory on MacOS, `AppData\\Local\\CEF\\User Data` directory under the user profile directory on Windows). Use of the default directory is not recommended in production applications (see below).\n> \n> \n> Multiple application instances writing to the same `root_cache_path` directory could result in data corruption. A process singleton lock based on the `root_cache_path` value is therefore used to protect against this. This singleton behavior applies to all CEF-based applications using version 120 or newer. You should customize `root_cache_path` for your application and implement `ICefBrowserProcessHandler.OnAlreadyRunningAppRelaunch`, which will then be called on any app relaunch with the same `root_cache_path` value.\n> \n> \n> Failure to set the `root_cache_path` value correctly may result in startup crashes or other unexpected behaviors (for example, the sandbox blocking read\/write access to certain files).\n> \n> \n>\n- `GlobalCEFApp.Cache`:\n\n\n\n> \n> The directory where data for the global browser cache will be stored on disk. If this value is non-empty then it must be an absolute path that is either equal to or a child directory of `TCefSettings.root_cache_path`. If this value is empty then browsers will be created in \"incognito mode\" where in-memory caches are used for storage and no profile-specific data is persisted to disk (installation-specific data will still be persisted in root\\_cache\\_path). HTML5 databases such as `localStorage` will only persist across sessions if a cache path is specified. Can be overridden for individual `ICefRequestContext` instances via the `ICefRequestContextSettings.cache_path` value. When using the Chrome runtime any child directory value will be ignored and the \"default\" profile (also a child directory) will be used instead.\n> \n> \n>"}
{"questionId":"acef2e5b7bbe77f8cd3a08c279721f554d286f7334837b4de3c35ad2c04036b7","question":"polars rolling by option not allowed?\nI have a data frame of the type:\n\n\n\n```\ndf = pl.LazyFrame({\"day\": [1,2,4,5,2,3,5,6], 'type': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], \"value\": [1, 0, 3, 4, 2, 2, 0, 1]})\n\n\nday type value\ni64 str i64\n1 \"a\" 1\n2 \"a\" 0\n4 \"a\" 3\n5 \"a\" 4\n2 \"b\" 2\n3 \"b\" 2\n5 \"b\" 0\n6 \"b\" 1\n\n\n```\n\nI am trying to create a rolling sum variable, summing, for each different \"type\", the values in a two days window. Ideally, the resulting dataset would be the following:\n\n\n\n\n| day | type | value | rolling\\_sum |\n| --- | --- | --- | --- |\n| 1 | a | 1 | 1 |\n| 2 | a | 0 | 1 |\n| 4 | a | 3 | 3 |\n| 5 | a | 4 | 7 |\n| 2 | b | 2 | 2 |\n| 3 | b | 2 | 4 |\n| 5 | b | 0 | 0 |\n| 6 | b | 1 | 1 |\n\n\nI tried using the following code:\n\n\n\n```\ndf = df.with_columns(pl.col(\"value\")\n .rolling(index_column=\"day\", by=\"type\", period=\"2i\")\n .sum().alias(\"rolling_sum\"))\n\n```\n\nbut I get the error: \"TypeError: rolling() got an unexpected keyword argument 'by'\".\n\n\nCould you help me fix it?","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"That's because in your code you're trying to use [`Expr.rolling()`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/expressions\/api\/polars.Expr.rolling.html) which doesn't have `by` parameter (strangely, it is mentioned in the documentation under `check_sorted` parameter - is it just not implemented yet?), instead of [`DataFrame.rolling()`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/dataframe\/api\/polars.DataFrame.rolling.html).\n\n\nIf you'd restructure the code to use the latter then it works fine:\n\n\n\n```\n(\n df.rolling(\n index_column=\"day\", by=\"type\", period=\"2i\"\n )\n .agg(\n pl.col('value').sum().alias(\"rolling_sum\")\n )\n)\n\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 type \u2506 day \u2506 rolling_sum \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2502\n\u2502 str \u2506 i64 \u2506 i64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 a \u2506 1 \u2506 1 \u2502\n\u2502 a \u2506 2 \u2506 1 \u2502\n\u2502 a \u2506 4 \u2506 3 \u2502\n\u2502 a \u2506 5 \u2506 7 \u2502\n\u2502 b \u2506 2 \u2506 2 \u2502\n\u2502 b \u2506 3 \u2506 4 \u2502\n\u2502 b \u2506 5 \u2506 0 \u2502\n\u2502 b \u2506 6 \u2506 1 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nIf you need to have `value` column in your result, you can use [`Expr.rolling_sum()`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/expressions\/api\/polars.Expr.rolling_sum.html) combined with [`Expr.over()`](https:\/\/pola-rs.github.io\/polars\/py-polars\/html\/reference\/expressions\/api\/polars.Expr.over.html) instead (assuming your DataFrame is sorted by `day` already):\n\n\n\n```\ndf.with_columns(\n pl.col(\"value\")\n .rolling_sum(window_size=2,min_periods=0)\n .over(\"type\")\n 
.alias('rolling_sum')\n)\n\n\u250c\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 day \u2506 type \u2506 value \u2506 rolling_sum \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2506 --- \u2502\n\u2502 i64 \u2506 str \u2506 i64 \u2506 i64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 1 \u2506 a \u2506 1 \u2506 1 \u2502\n\u2502 2 \u2506 a \u2506 0 \u2506 1 \u2502\n\u2502 4 \u2506 a \u2506 3 \u2506 3 \u2502\n\u2502 5 \u2506 a \u2506 4 \u2506 7 \u2502\n\u2502 2 \u2506 b \u2506 2 \u2506 2 \u2502\n\u2502 3 \u2506 b \u2506 2 \u2506 4 \u2502\n\u2502 5 \u2506 b \u2506 0 \u2506 2 \u2502\n\u2502 6 \u2506 b \u2506 1 \u2506 1 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nIdeally, I would probably expect `Expr.rolling` together with `Expr.over` to work:\n\n\n\n```\n# something like this\ndf.with_columns(\n pl.col(\"value\")\n .rolling(index_column=\"day\", period=\"2i\")\n .sum()\n .over(\"type\")\n .alias('rolling_sum')\n)\n\n# or this\ndf.set_sorted(['type','day']).with_columns(\n pl.col(\"value\")\n .sum()\n .over('type')\n .rolling(index_column=\"day\", period=\"2i\")\n .alias('rolling_sum')\n)\n\n```\n\nbut unfortunately, it doesn't:\n\n\n\n```\nInvalidOperationError: rolling expression not allowed in aggregation\n\n```\n\n**Update**\n\n\nUsing `rolling_sum()` might not be something you want, if you plan your window to be based on days \/ weeks etc.\nIn this case you can still use `DataFrame.rolling()` and combine it with [`Expr.last()`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/expressions\/api\/polars.Expr.last.html) inside of [`GroupBy.agg()`](https:\/\/docs.pola.rs\/py-polars\/html\/reference\/dataframe\/api\/polars.dataframe.group_by.GroupBy.agg.html) to get the last value in the window:\n\n\n\n```\n(\n df.rolling(\n index_column=\"day\", by=\"type\", period=\"2i\"\n )\n .agg(\n pl.col('value').last(),\n pl.col('value').sum().alias(\"rolling_sum\")\n )\n)\n\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 type \u2506 day \u2506 value \u2506 rolling_sum \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2506 --- \u2502\n\u2502 str \u2506 i64 \u2506 i64 \u2506 i64 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 a \u2506 1 \u2506 1 \u2506 1 \u2502\n\u2502 a \u2506 2 \u2506 0 \u2506 1 \u2502\n\u2502 a \u2506 4 \u2506 3 \u2506 3 \u2502\n\u2502 a \u2506 5 \u2506 4 \u2506 7 \u2502\n\u2502 b \u2506 2 \u2506 2 \u2506 2 \u2502\n\u2502 b \u2506 3 \u2506 2 \u2506 4 \u2502\n\u2502 b \u2506 5 \u2506 0 \u2506 0 \u2502\n\u2502 b \u2506 6 \u2506 1 \u2506 1 
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518"}
{"questionId":"f518a3d8a277b6bc33e027ca3a5e23722e3aa19566a044b95af2e08ac83c1c51","question":"Require a field to be an integer or null, but not missing in JSON\nI want to deserialize the following struct using serde\\_json. The `parent_id` field should accept an integer or a null, but I want it to return an error if the field is missing.\n\n\n\n```\n#[derive(Debug, Serialize, Deserialize)]\npub struct Group {\n pub name: String,\n pub parent_id: Option<i32>,\n}\n\n\/\/ this is accepted\nserde_json::from_value(json!({\n \"name\": \"Group 1\",\n \"parent_id\": null,\n});\n\n\/\/ this should return an error\nserde_json::from_value(json!({\n \"name\": \"Group 1\",\n});\n\n```\n\nI tried using the code above, but `parent_id` would be deserialized into a `None` even though it doesn't exist.","questionMetadata":{"type":"implementation","tag":"rust","level":"intermediate"},"answer":"You can [`deserialize_with` with custom function](https:\/\/serde.rs\/field-attrs.html#deserialize_with) to get the expected behaviour.\n\n\n\n```\nuse serde::{Deserialize, Deserializer, Serialize};\nuse serde_json::json;\n\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct Group {\n pub name: String,\n #[serde(deserialize_with = \"Option::deserialize\")]\n pub parent_id: Option<i32>,\n}\n\nfn main() {\n \/\/ thread 'main' panicked at src\/main.rs:25:10:\n \/\/ called `Result::unwrap()` on an `Err` value: Error(\"missing field \n \/\/ `parent_id`\", line: 0, column: 0)\n let r: Group = serde_json::from_value(json!({\n \"name\": \"Group 1\",\n })).unwrap();\n println!(\"{:?}\", r);\n\n}"}
{"questionId":"29dec689ef51607453b66f5b19739867a08d715979774d443b384208d361f00e","question":"Why ngModel doesn't works on the last version of Angular 17?\nI am trying to make a form in my angular app, but when i want to implement ngModel on my form :\n\n\n\n```\n <form (ngSubmit)=\"onSignUp()\" #signupForm=\"ngForm\">\n <h1>Connexion<\/h1>\n <input type=\"email\" name=\"mail\" [(ngModel)]=\"userLogin.email\" placeholder=\"Email\" \/>\n <input type=\"password\" name=\"mdp\" [(ngModel)]=\"userLogin.password\" placeholder=\"Password\" \/>\n <a href=\"#\">Mot de passe oublie ?<\/a>\n <button type=\"submit\">Se Connecter<\/button>\n <\/form>\n\n```\n\nI have this error :\n\n\n**NG8002: Can't bind to 'ngModel' since it isn't a known property of 'input'. [plugin angular-compiler]**\n\n\nI can't import FormsModule in the app.module.ts because this file doesn't exists on Angular 17, i only have an app.config.ts file.\n\n\nCan someone please explain me how to do?","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"If your component is set with `standalone: true`, then you need to add `FormsModule` to the `imports` array of the component!\n\n\nSince it's standalone, we need to add all the necessary dependencies to the imports array!\n\n\n\n```\n@Component({\n ...\n imports: [\n ...\n FormsModule,\n ...\n ],\n ...\n})"}
{"questionId":"09569bbf440319ae71fa8832e17f7c3f391a0b0c27b6e30f9b22d8b1edd42dc4","question":"Hilt with ksp instead of kapt\nhow to use hilt with ksp instead of kapt seems like i can't figure it out\nplease let me know what dependencies should i add and how should i add them\n\n\ndependencies i added:\n\n\n\n```\n\/\/hilt\n val hiltVersion = \"2.51\" \n implementation(\"com.google.dagger:hilt-android:$hiltVersion\")\n ksp(\"com.google.dagger:hilt-android-compiler:$hiltVersion\")\n ksp(\"com.google.dagger:hilt-compiler:$hiltVersion\")\n\n```\n\nplugins:\n\n\n\n```\nplugins {\n id(\"com.android.application\")\n id(\"org.jetbrains.kotlin.android\")\n id (\"com.google.dagger.hilt.android\")\n id(\"com.google.devtools.ksp\") version \"1.9.22-1.0.17\"\n}\n\n```\n\nbuild gradle:\n\n\n\n```\nplugins {\n id(\"com.android.application\") version \"8.2.2\" apply false\n id(\"org.jetbrains.kotlin.android\") version \"1.9.0\" apply false\n id(\"com.google.dagger.hilt.android\") version \"2.51\" apply false\n id(\"com.google.devtools.ksp\") version \"1.9.22-1.0.17\" apply false\n}\n\n```\n\ni tried different hilt versions like 2.48.1\ndifferent kotlinCompilerExtensionVersion like 1.5.8\n\n\nnothing seems to work i've got multiple different errors don't know what i'm doing neither do i know what i'm doing wrong","questionMetadata":{"type":"version","tag":"kotlin","level":"intermediate"},"answer":"When using `kotlin`, `ksp` and `compose` you have to keep in mind to use versions that are compatible with each other, otherwise building the project will most likely fail.\n\n\n**Kotlin and KSP**\n\n\nTake a look at [releases](https:\/\/github.com\/google\/ksp\/releases), ksp version always consist of two parts e.g. `1.9.23-1.0.20` where `1.9.23` is kotlin version and `1.0.20` is actual KSP version (i think).\n\n\n**Kotlin and Compose**\n\n\nList of compatible versions can be found in [Android docs](https:\/\/developer.android.com\/jetpack\/androidx\/releases\/compose-kotlin).\n\n\n**Your case**\n\n\nSince you are using **kotlin** `1.9.0` you should use **KSP** `1.9.0-1.0.13` and **kotlinCompilerExtensionVersion** `1.5.2`. For the dagger it should word fine for version `2.48` and above based on [this](https:\/\/dagger.dev\/dev-guide\/ksp.html), so version `2.51` is fine."}
{"questionId":"8e033749fe691b28341689bae7a0aff668972af8edb12e99e94f947a9a4883d2","question":"Argument of type 'EnvironmentProviders' is not assignable to parameter of type 'ImportProvidersSource'.ts(2345) in Angular Firebase project\nI am working on an Angular project that is deployed on Firebase. All the tutorials about Firebase suggest the following way to store Firebase in the `app.config.ts`:\n\n\n\n```\nexport const appConfig: ApplicationConfig = {\n providers: [\n importProvidersFrom(\n provideFirebaseApp(() => initializeApp(environment.firebase)), \/* The problem line *\/\n provideFirestore(() => getFirestore()),\n ),\n provideRouter(routes)\n ],\n};\n\n```\n\nI am currently have an error that is marked in VS Code and reported during `ng serve`: \"Argument of type 'EnvironmentProviders' is not assignable to parameter of type 'ImportProvidersSource'.ts(2345)\"\n\n\nAnd I have no idea what to do and how to resolve that. I hope someone here can help me with that.\n\n\nI tried many times different things. I have totally deleted and recreated my Firebase project, my application in Firebase, my Angular project. Nothing works.\n\n\nIf I am deleting the problem line, then the next one reporting the same problem.","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"This is related to Angular version not angular\/fire, to solve the problem just remove the `importProvidersFrom` as it's no longer needed."}
{"questionId":"203d92862b92bec7ca24558512d723bc3a330162df45cc8bbfc31386c943dde5","question":"R. get\\_legend() from cowplot package no longer work for ggplot2 version 3.5.0\nget\\_legend() now returns error for ggplot object\n\n\nI used to do get\\_legend(plot) to extract the legend part of the plot. It still works for ggplot2 version 3.4.4. Now that I updated my ggplot2 to version 3.5.0. I think it changes the way legend is stored.\nNow get\\_legend(plot) will return nothing and give the warning:\n\n\n\n```\nget_legend(plot)\n\nWarning messages:\n1: In get_plot_component(plot, \"guide-box\") :\n Multiple components found; returning the first one. To return all, use `return_all = TRUE`.\n\n```\n\nI have also tried get\\_plot\\_component(). It doesn't work, nor does list(get\\_plot\\_component()) work.\n\n\n\n```\nlegend = get_plot_component(plot, 'guide-box', return_all = TRUE)\nggdraw(legend)\n\nWarning messages:\n1: In as_grob.default(plot) :\n Cannot convert object of class list into a grob.\n\n```\n\nIs there any other ways to extract the legend of the plot? Thank you!","questionMetadata":{"type":"version","tag":"r","level":"intermediate"},"answer":"As a temporary solution, change your second argument to `get_plot_component()`.\n\n\nYou have `legend = get_plot_component(plot, 'guide-box', return_all = TRUE)`.\n\n\nChange the value `guide-box` to one of the following values: `guide-box-right`, `guide-box-left`, `guide-box-bottom`, `guide-box-top`, or `guide-box-inside`.\n\n\nExample:\n\n\n\n```\ndf <- data.frame(pet = c(\"cat\",\"dog\",\"snake\"),\n count = c(20,5,94))\n\nplot <- ggplot2::ggplot(df, ggplot2::aes(pet, count, fill = pet)) +\n ggplot2::geom_col() +\n ggplot2::theme(legend.position = \"top\") # <- specify position of legend\n\nlegend = cowplot::get_plot_component(plot, 'guide-box-top', return_all = TRUE)\ncowplot::ggdraw(legend)"}
{"questionId":"4b390822f371a738b8545f51de113af74e742c29166026f89c91b57b4d82f643","question":"Azure Function app (.NET 8) not logging info to app insights\nI have a .NET 8 Azure Function app which I have been working on, this is the first .NET 8 app that I have created, though I have worked on function apps since .NET Core 3.1.\n\n\nI'm sure this is something really basic that I am missing, but I can't see what.\n\n\nThis is my `host.json`\n\n\n\n```\n{\n \"version\": \"2.0\",\n \"logging\": {\n \"logLevel\": {\n \"default\": \"debug\"\n },\n \"applicationInsights\": {\n \"samplingSettings\": {\n \"isEnabled\": true\n \"excludedTypes\": \"Request\"\n }\n }\n }\n}\n\n```\n\nThis is the function:\n\n\n\n```\n[Function(\"MYFUNCNAME\")]\npublic void Run([TimerTrigger(\"0 *\/1 * * * *\")] TimerInfo myTimer)\n{\n _logger.LogError(\"LOCAL THIS IS AN ERROR MESSAGE WHICH IS LOGGED\");\n \n _logger.LogInformation($\"THIS WONT BE LOGGED IN AI - THOUGH IT DOES SHOW IN THE CONSOLE\");\n\n _logger.LogDebug(\"THIS IS NOT LOGGED EITHER\");\n}\n\n```\n\nIn the console I can see ALL of the messages.\n\n\nHowever in app insights I can only see the error messages.\n\n\nI want all of the messages in AI as well as in the console.\n\n\nI am at a loss, I have another function app which has the same configuration and it works perfectly. I cant see any difference between configurations in the function app or app insights between the working and non working projects (except the working one is on .NET 6 and the non working one is on .NET 8).\n\n\nWhat could I be missing here?","questionMetadata":{"type":"version","tag":"c#","level":"intermediate"},"answer":"Had the same issue with .net7 Functions in Isolated Mode, I'd guess your old function app did not run in isolated mode and the new one does?\n\n\nIf that is the case then you need configure the `LogFilterOptions` and remove the `ApplicationInsightsLoggingProvider`\n\n\nThis is done in your hostBuilder (found in program.cs)\n\n\nYou just need to add the following after the `services.ConfigureFunctionsApplicationInsights()` call.\n\n\n\n```\nservices.Configure<LoggerFilterOptions>(options =>\n{\n var appInsightsLoggerProvider = options.Rules.FirstOrDefault(rule => rule.ProviderName == \"Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider\");\n\n if (appInsightsLoggerProvider != default) options.Rules.Remove(appInsightsLoggerProvider);\n});\n\n```\n\nIts not very well documented, at least I didn't think so, but if you google \"azure functions dotnet isolated process guide\" there is a microsoft learn page which talks about the isolated model - its not explicitly talked about in there *BUT* one of the code samples (in the startup and configuration section) includes something very similar to the code above.\n\n\nAfter that it should work."}
{"questionId":"f3f4c0999c6bf5075f47d0089a95b93e4cfe33c6cfba784c77eb985b49c88757","question":"Multiplication of huge massive of numbers in python\nI'm working on a small python program for myself and I need an algorithm for fast multiplication of a huge array with numbers (over 660 000 numbers, each is 9 digits). The result number is over 4 millions digits. Currently I'm using math.prod, which calculates it in ~10 minutes, but that's too slow, especially if I want to increase amount of numbers.\n\n\nI checked some algorithms for faster multiplications, for example Sch\u00f6nhage\u2013Strassen algorithm and Toom\u2013Cook multiplication, but I didn't understand how they works or how to make them. I tried some versions that I've found on the internet, but they're not working too well and are even slower. I wonder if someone knows how to multiplicate these amounts of numbers faster, or could explain how to use some math to do this?","questionMetadata":{"type":"optimization","tag":"python","level":"intermediate"},"answer":"There are two keys to making this fast. First, using the fastest mult implementation you can get. For \"sufficiently large\" multiplicands, Python's Karatsuba mult is `O(n^1.585)`. The `decimal` module's much fancier NTT mult is more like `O(n log n)`. But fastest of all is to install the `gmpy2` extension package, which wraps GNU's GMP library, whose chief goal is peak speed. That has essentially the same asymptotics as `decimal` mult, but with a smaller constant factor.\n\n\nSecond, the advanced mult algorithms work best when multiplying two large ints of about the same size (number of bits). You can leave that to luck, or, as below, you can force it by using a priority queue and, at each step, multiplying the \"two smallest\" partial products remaining.\n\n\n\n```\nfrom gmpy2 import mpz\nfrom heapq import heapreplace, heappop, heapify\n\n# Assuming your input ints are in `xs`.\nmpzs = list(map(mpz, xs))\nheapify(mpzs)\nfor _ in range(len(mpzs) - 1):\n heapreplace(mpzs, heappop(mpzs) * mpzs[0])\nassert len(mpzs) == 1\n# the result is mpzs[0]\n\n```\n\nThat's the code I'd use. Note that the cost of recursion (which this doesn't use) is trivial compared to the cost of huge-int arithmetic. Heap operations are more expensive than recursion, but still relatively cheap, and can waaaaay more than repay their cost if the input is in an order such that the \"by luck\" methods aren't lucky enough."}
{"questionId":"dc75cae4b28e714e97f030e80466ee8063275ece84fd67a9a1ff8b4a7e41f5db","question":"Incorrect return type allowed for forward declared function: Why is there no linker error here?\n$ g++ --version\nConfigured with: --prefix=\/Library\/Developer\/CommandLineTools\/usr --with-gxx-include-dir=\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX.sdk\/usr\/include\/c++\/4.2.1\nApple clang version 12.0.0 (clang-1200.0.32.29)\nTarget: x86_64-apple-darwin23.4.0\nThread model: posix\nInstalledDir: \/Library\/Developer\/CommandLineTools\/usr\/bin\n\n```\n\na.cc\n\n\n\n```\n#include<iostream>\n\nusing namespace std;\n\nstatic int x = 5053;\n\nvoid f2();\n\nint main() {\n cout << \"a: \" << x << endl;\n f2();\n return 0;\n}\n\n```\n\nb.cc\n\n\n\n```\n#include<iostream>\n\nusing namespace std;\n\nstatic int x = 4921;\n\nstring f2() {\n cout << \"b: \" << x << endl;\n return \"\";\n}\n\n```\n\nOutput\n\n\n\n```\n$ g++ --std=c++17 a.cc b.cc && .\/a.out\na: 5053\nb: 4921\n\n```\n\nWhy was I able to forward declare `string f2();` from `b.cc` as `void f2();` in `a.cc`?\n\n\nAny references to cppreference or spec that allows this would be appreciated.","questionMetadata":{"type":"version","tag":"c++","level":"advanced"},"answer":"We don't *always* get a linker error because the C++ standard does not require a linker error or any other error in this case, and the implementations *most of the time* do not go the extra mile.\n\n\nThe standard does not require an error because the common linker technology does not allow the implementation to detect such errors.\n\n\nStroustrup decided to leave the return type out of C++ [name mangling](https:\/\/en.wikipedia.org\/wiki\/Name_mangling), so that names of two functions that differ only by their return type mangle to the same symbol name. This makes some ODR violation errors silently go undetected. Making the return type participate in the mangling would make *other* ODR violation errors go undetected.\n\n\nThere is no way to make *all* ODR-violation errors detectable with existing commonly available linkers. So the standard essentially permits the implementation to do what the original Stroustrup's compiler did: leave the return type out of the name mangling.\n\n\nThe compiler you use does just that. So it is something like `_Z2f2v` in the object file. You can see that by typing `nm a.o | grep f2` and `nm b.o | grep f2`, or looking at the assembly output on [the compiler explorer](https:\/\/godbolt.org\/z\/bhns81xYv) (use `clang`, add `-stdlib=libc++` to the options and deselect \"demangle symbols\" in the menu). There is no indication anywhere that the function is supposed to return any specific type.\n\n\nSo why does `gcc` detect this error then?\n\n\nThat's because *sometimes* `gcc` does include the return type in the name mangling, and your program just happens to hit that special case. It has to do with the big great ABI breakage of C++11. `gcc` and `libstdc++` had to change the layout of some standard library classes, notably `std::string` (also `std::list` but hey who uses that?) So the `gcc` authors did a clever thing to maintain backwards compatibility: they changed the mangling of *everything* that involves these classes --- and this time in the return type too. This way, old ABI code cannot link with new ABI code without errors. The names won't match.\n\n\nYou can confirm that by invoking `nm` on objects compiled with `g++` [or going to the compiler explorer again](https:\/\/godbolt.org\/z\/b46ov3PYv). 
You will see that an object that mentions `void f2()` still has something like `_Z2f2v` in it, but an object that mentions `string f2()` has something like `_Z2f2B5cxx11v` (note `cxx11` in the name -- this way we know we are dealing with the new post-C++11 ABI). All is well, but if you use both functions in the same program (in different translation units), the compiler won't detect it, which is the other kind of undetectable ODR violation (still allowed by the standard).\n\n\nThis does not happen with other types, so if you change `f2` to return say `std::vector<char>`, you will *not* get a linker error with either compiler."}
{"questionId":"c7031255e2461d28ab68f33f62650ace3dd888a9f781b3db4859119d8245abaa","question":"log rotation problem on custom application\nI have server program, written in C++. I usually start it as\n\n\n\n```\ndb_net 2> log_file\n\n```\n\nI do not use anything special about logging, just print on stderr.\n\n\nHowever sometimes, the log\\_file become huge. If I do\n\n\n\n```\necho > log_file\n\n```\n\nIt does not have effect, the file is not shrieked to zero. I know this worked 10 years ago, but now it does not.\n\n\nQuestion is following:\n\n\n- Can this be fixed from OS side somehow (I am running on Linux), without restarting the application\n- if it can not be fixed from the OS, can I fix this from the application? I can close and open stderr, but I do not really know the filename where stderr points to, since it is redirected. C solution is OK too.\n\n\nI know MySQL doing this with `flush logs;` and apache \/ nginx can do the same, however they know the filename and in their case is easy to reopen.\n\n\nI know I can implement log file inside configuration, but currently I am looking for fast solution.\n\n\nFilesystem is XFS.","questionMetadata":{"type":"debugging","tag":"c++","level":"intermediate"},"answer":"Use append mode:\n\n\n\n```\ndb_net 2>> log_file\n\n```\n\nNow when you manually truncate the file, the process will just continue to append to the new, empty file."}
{"questionId":"8642a14b40c55da56f1a211f736328761bfdf2760b88e231ccd587ed046899b9","question":"Byte-wise iteration through an array in generic function in Ada\nI have an array in a generic function where I want to iterate through it byte by byte. In the following example I am using `Wide_Wide_Character`'s, so a character is 4 bytes long. With 3 given input characters, I would have 12 bytes to iterate through. How can I achieve this? Thanks for your time.\n\n\n\n```\nwith Ada.Text_IO;\n\nprocedure Test is\n\n generic\n type T is (<>);\n type T_Index is range <>;\n type T_Array is array (T_Index range <>) of T;\n function A (b : T_Array) return Integer;\n\n function A (b : T_Array) return Integer is\n begin\n\n -- how can I iterate here through each byte instead of \"byte blocks\"?\n -- should be 12 bytes in this example (3 characters * 4 bytes)\n -- can I map b to System.Storage_Elements.Storage_Array (without copying it)?\n for I in 1 .. b'Length loop\n Ada.Text_IO.Put_Line (b (T_Index (I))'Image);\n end loop;\n\n return 1;\n end A;\n\n function A1 is new A (Wide_Wide_Character, Positive, Wide_Wide_String);\n\n unused : Integer := A1 (\"abc\");\n\nbegin\n\n null;\n\nend Test;","questionMetadata":{"type":"implementation","tag":"ada","level":"intermediate"},"answer":"This can be done. The recommended way is to introduce a nested loop as shown in the example below and convert each array element to a storage array.\n\n\nConverting the array `b` to a storage array as a whole in one go is discouraged as an array may not be contiguously stored in memory if the index type is an enumeration type represented by non-contiguous values. In such case, the array may be stored in memory with \"holes\" depending on the compiler implementation.\n\n\n**test.adb**\n\n\n\n```\nwith Ada.Text_IO;\nwith Ada.Unchecked_Conversion;\nwith System.Storage_Elements;\n\nprocedure Test with SPARK_Mode is\n\n generic\n type T is (<>);\n type T_Index is range <>;\n type T_Array is array (T_Index range <>) of T;\n procedure A (b : T_Array);\n\n procedure A (b : T_Array) is\n\n package SSE renames System.Storage_Elements;\n use type SSE.Storage_Count;\n\n Num_Storage_Elems : constant SSE.Storage_Count :=\n T'Size \/ SSE.Storage_Element'Size;\n\n subtype T_As_Storage_Array is SSE.Storage_Array (1 .. Num_Storage_Elems);\n\n -- Conversion per array element.\n function To_Storage_Array is new Ada.Unchecked_Conversion\n (Source => T, Target => T_As_Storage_Array);\n\n begin\n for Elem of b loop\n for Storage_Elem of To_Storage_Array (Elem) loop\n Ada.Text_IO.Put_Line (Storage_Elem'Image);\n end loop;\n end loop;\n\n end A;\n\n procedure A1 is new A (Wide_Wide_Character, Positive, Wide_Wide_String);\n\nbegin\n A1 (\"abc\");\nend Test;\n\n```\n\n**output**\n\n\n\n```\n$ test\n97\n 0\n 0\n 0\n 98\n 0\n 0\n 0\n 99\n 0\n 0\n 0"}
{"questionId":"11a59e8a21e18d81037fb2507a9fd8dcfd1e28f029c90f6dfc48494f080e9e2f","question":"Difference between \"async let\" and \"async let await\"\nI know that we wait with `await` and execute a task without need to wait with `async let`, but I can't understand the difference between these two calls:\n\n\n\n```\nasync let resultA = myAsyncFunc()\nasync let resultB = await myAsyncFunc()\n\n```\n\nIn my experiment, both of these seem to behave exactly the same, and the `await` keyword does not have any effects here, but I'm afraid I'm missing something.\n\n\nThanks in advance for explanation on this. \ud83d\ude4f\ud83c\udffb\n\n\n\n\n---\n\n\n##### Update\n\n\nI'm adding a working sample so you can see the behavior\n\n\n\n```\nfunc myAsyncFuncA() async -> String {\n print(\"A start\")\n try? await Task.sleep(for: .seconds(6))\n return \"A\"\n}\n\nfunc myAsyncFuncB() async -> String {\n print(\"B start\")\n try? await Task.sleep(for: .seconds(3))\n return \"B\"\n}\n\nasync let resultA = myAsyncFuncA()\nasync let resultB = await myAsyncFuncB()\nprint(\"Both have been triggered\")\nawait print(resultA, resultB)\n\n```\n\nResults:\n\n\n\n```\nA start \/\/ Immediately\nB start \/\/ Immediately\nBoth have been triggered \/\/ Immediately\nA B \/\/ After 6 seconds\n\n```\n\nSo as you can see, `resultA` does not block the context and the total waiting time is the biggest waiting time.","questionMetadata":{"type":"conceptual","tag":"swift","level":"intermediate"},"answer":"You asked what is the \u201cdifference between \u2018async let\u2019 and \u2018async let await\u2019\u201d. There is none.\n\n\nThe `await` is unnecessary at this point and is generally omitted. One can argue that in `async let x = await \u2026` declaration, the `await` would best be omitted, to avoid confusion, because it does not actually `await` at that point.\n\n\nSo, the behavior you outline in your revision to your question is correct.\n\n\n\n```\nasync let resultA = myAsyncFuncA() \/\/ execution of this current task is not suspended\nasync let resultB = await myAsyncFuncB() \/\/ execution is also not suspended here\nprint(\"Both have been triggered\") \/\/ we can see this before the above two child tasks finish\nawait print(resultA, resultB) \/\/ only here is this task suspended, awaiting those two child tasks\n\n```\n\nWhen `resultA` and `resultB` are declared with `async let`, the respective asynchronous child tasks will be created, but the current task will *not* be suspended (notably, despite the `await` in `async let resultB = await \u2026`). Execution of the current task continue to proceed to the subsequent lines, after those two initializations, while A and B run concurrently. Execution will not actually await the results until you hit the `await` on the fourth line, the `await print(\u2026)`. The `await` in the second line, where you `async let resultB = await \u2026`, does *not* actually await it.\n\n\nIn [SE-0317 \u2013 `async let` bindings](https:\/\/github.com\/apple\/swift-evolution\/blob\/main\/proposals\/0317-async-let.md), they say that the \u201cinitializer of a `async let` permits the omission of the `await` keyword\u201d. Describing this as \u201cpermits\u201d is an understatement; we practically always omit the `await` at the point of initialization."}
{"questionId":"383475cacb281cba53d9692a9f8bb4c012ea7ecc90ae23e80369e68bc9e40109","question":"How to unpack a string into multiple columns in a Polars DataFrame using expressions?\nI have a Polars DataFrame containing a column with strings representing 'sparse' sector exposures, like this:\n\n\n\n```\ndf = pl.DataFrame(\n pl.Series(\"sector_exposure\", [\n \"Technology=0.207;Financials=0.090;Health Care=0.084;Consumer Discretionary=0.069\", \n \"Financials=0.250;Health Care=0.200;Consumer Staples=0.150;Industrials=0.400\"\n ])\n)\n\n```\n\n\n\n| sector\\_exposure |\n| --- |\n| Technology=0.207;Financials=0.090;Health Care=0.084;Consumer Discretionary=0.069 |\n| Financials=0.250;Health Care=0.200;Consumer Staples=0.150;Industrials=0.400 |\n\n\nI want to \"unpack\" this string into new columns for each sector (e.g., Technology, Financials, Health Care) with associated values or a polars struct with sector names as fields and exposure values.\n\n\nI'm looking for a more efficient solution using polars expressions only, without resorting to Python loops (or python mapped functions). Can anyone provide guidance on how to accomplish this?\n\n\nThis is what I have come up with so far - which works in producing the desired struct but is a little slow.\n\n\n\n```\n(\n df[\"sector_exposure\"]\n .str\n .split(\";\")\n .map_elements(lambda x: {entry.split('=')[0]: float(entry.split('=')[1]) for entry in x},\n skip_nulls=True,\n )\n)\n\n```\n\nOutput:\n\n\n\n```\nshape: (2,)\nSeries: 'sector_exposure' [struct[6]]\n[\n {0.207,0.09,0.084,0.069,null,null}\n {null,0.25,0.2,null,0.15,0.4}\n]\n\n```\n\nThanks!","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"There are potentially two ways to do it that I can think of.\n\n\n## Regex extract\n\n\n\n```\ndf.with_columns(pl.col('sector_exposure').str.extract(x+r\"=(\\d+\\.\\d+)\").cast(pl.Float64).alias(x) \n for x in [\"Technology\", \"Financials\", \"Health Care\", \"Consumer Discretionary\",\n \"Consumer Staples\",\"Industrials\"])\n\nshape: (2, 7)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 sector_exposur \u2506 Technology \u2506 Financials \u2506 Health Care \u2506 Consumer \u2506 Consumer \u2506 Industrials \u2502\n\u2502 e \u2506 --- \u2506 --- \u2506 --- \u2506 Discretionary \u2506 Staples \u2506 --- \u2502\n\u2502 --- \u2506 f64 \u2506 f64 \u2506 f64 \u2506 --- \u2506 --- \u2506 f64 \u2502\n\u2502 str \u2506 \u2506 \u2506 \u2506 f64 \u2506 f64 \u2506 
\u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 Technology=0.2 \u2506 0.207 \u2506 0.09 \u2506 0.084 \u2506 0.069 \u2506 null \u2506 null \u2502\n\u2502 07;Financials= \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 0.090;Health \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 Care=0.084;Con \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 sumer Discreti \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 onary=0.069 \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 Financials=0.2 \u2506 null \u2506 0.25 \u2506 0.2 \u2506 null \u2506 0.15 \u2506 0.4 \u2502\n\u2502 50;Health Care \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 =0.200;Consume \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 r Staples=0.15 \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 0;Industrials= \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 0.400 \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nIn this one we're counting on all the numbers being decimal (you could tweak the regex to get around this a bit) and all the sectors being prespecified in the generator within `with_columns`\n\n\n## Split and pivot\n\n\n\n```\n(\n df\n .with_columns(str_split=pl.col('sector_exposure').str.split(';'))\n .explode('str_split')\n .with_columns(\n pl.col('str_split')\n .str.split('=')\n .list.to_struct(fields=['sector','value'])\n )\n .unnest('str_split')\n .pivot(values='value',index='sector_exposure',columns='sector',aggregate_function='first')\n .with_columns(pl.exclude('sector_exposure').cast(pl.Float64))\n )\nshape: (2, 7)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 sector_exposur \u2506 Technology \u2506 Financials \u2506 Health Care \u2506 Consumer \u2506 Consumer \u2506 Industrials \u2502\n\u2502 e \u2506 --- \u2506 --- \u2506 --- \u2506 
Discretionary \u2506 Staples \u2506 --- \u2502\n\u2502 --- \u2506 f64 \u2506 f64 \u2506 f64 \u2506 --- \u2506 --- \u2506 f64 \u2502\n\u2502 str \u2506 \u2506 \u2506 \u2506 f64 \u2506 f64 \u2506 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 Technology=0.2 \u2506 0.207 \u2506 0.09 \u2506 0.084 \u2506 0.069 \u2506 null \u2506 null \u2502\n\u2502 07;Financials= \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 0.090;Health \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 Care=0.084;Con \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 sumer Discreti \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 onary=0.069 \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 Financials=0.2 \u2506 null \u2506 0.25 \u2506 0.2 \u2506 null \u2506 0.15 \u2506 0.4 \u2502\n\u2502 50;Health Care \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 =0.200;Consume \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 r Staples=0.15 \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 0;Industrials= \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2502 0.400 \u2506 \u2506 \u2506 \u2506 \u2506 \u2506 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nIn this one you do a \"round\" of splitting at the semi colon and then explode. Then you split again on the equal but you turn that into a struct which you then unnest. From there you pivot the sectors up to columns.\n\n\nIf the sectors existed in the same order then you could use `str.extract_groups` but with varying orders I don't think it works."}
{"questionId":"531b5516c120198ce9383afed99b692ef99c74f80dece02554711b0e5ccb331d","question":"Plotly plot does not render in viewer pane\nTrying to use plotl\\_ly() in Rstudio and it refuses to render in the viewer pane as expected.\n\n\nTried uninstalling plotly, R and Rstudio and the problem persists. Anyone else have the same problem?\n\n\nTried running a minimal example like so and I get a white screen in my viewer pane instead of a plot.\n\n\n\n```\nlibrary(plotly)\n\nplot_ly(x = 1:10, y = 1:10)\n\n```\n\nHere is my session info:\n\n\n\n```\nR version 4.4.0 (2024-04-24 ucrt)\nPlatform: x86_64-w64-mingw32\/x64\nRunning under: Windows 11 x64 (build 22631)\n\nMatrix products: default\n\n\nlocale:\n[1] LC_COLLATE=English_Canada.utf8 LC_CTYPE=English_Canada.utf8 LC_MONETARY=English_Canada.utf8\n[4] LC_NUMERIC=C LC_TIME=English_Canada.utf8 \n\ntime zone: America\/Toronto\ntzcode source: internal\n\nattached base packages:\n[1] stats graphics grDevices utils datasets methods base \n\nother attached packages:\n[1] plotly_4.10.4 ggplot2_3.5.1\n\nloaded via a namespace (and not attached):\n [1] vctrs_0.6.5 httr_1.4.7 cli_3.6.2 rlang_1.1.3 purrr_1.0.2 \n [6] generics_0.1.3 jsonlite_1.8.8 data.table_1.15.4 glue_1.7.0 colorspace_2.1-0 \n[11] htmltools_0.5.8.1 scales_1.3.0 fansi_1.0.6 grid_4.4.0 crosstalk_1.2.1 \n[16] munsell_0.5.1 tibble_3.2.1 fastmap_1.1.1 yaml_2.3.8 lifecycle_1.0.4 \n[21] compiler_4.4.0 dplyr_1.1.4 htmlwidgets_1.6.4 pkgconfig_2.0.3 tidyr_1.3.1 \n[26] rstudioapi_0.16.0 digest_0.6.35 viridisLite_0.4.2 R6_2.5.1 tidyselect_1.2.1 \n[31] utf8_1.2.4 pillar_1.9.0 magrittr_2.0.3 withr_3.0.0 tools_4.4.0 \n[36] gtable_0.3.5 lazyeval_0.2.2 ","questionMetadata":{"type":"version","tag":"r","level":"intermediate"},"answer":"User @sactyr's comment\n\n\n\n> \n> This issue has been recently fixed, happens in RStudio when using R version 4.4.0 with all plot and HTML objects. See the [issue](https:\/\/github.com\/rstudio\/rstudio\/issues\/14603) here. Looks like issue is fixed by installing [patched build](https:\/\/cran.r-project.org\/bin\/windows\/base\/rpatched.html).\n> \n> \n> \n\n\nsolved my issue. Installing the *unofficial* [patched build](https:\/\/cran.r-project.org\/bin\/windows\/base\/rpatched.html) brings back `plotly` to the viewer pane. Seems like the issue is indeed related to [R version 4.4.0](https:\/\/github.com\/rstudio\/rstudio\/issues\/14603)."}
{"questionId":"0837651d479857180c49106317bf2b69ae29a1229d43ade10c4460726d236bd4","question":"What is the correct way to obtain a String from a Foreign Function that returns a char pointer\nIs there an efficient way to obtain a Java string from a Foreign Function that returns a C-style `char` pointer?\n\n\nFor example the SQLite library contains a function to return the library version number:\n\n\n\n```\nSQLITE_API const char *sqlite3_libversion(void);\n\n```\n\nUsing Java's Foreign Function and Memory API I can call this function like so:\n\n\n\n```\nfinal MemorySegment ms = this.symbolLookup.find(\"sqlite3_libversion\")\n .orElseThrow(() -> new RuntimeException(\"Could not find method 'sqlite3_libversion\"));\nfinal FunctionDescriptor fd = FunctionDescriptor.of(ValueLayout.ADDRESS);\nfinal Linker linker = Linker.nativeLinker();\nfinal MemorySegment result = (MemorySegment)linker.downcallHandle(ms, fd).invoke();\ntry (final Arena arena = Arena.ofConfined()){\n final MemorySegment ptr = result.reinterpret(10, arena, null);\n return ptr.getUtf8String(0);\n}\n\n```\n\nThe problem with this is that I have created a new `MemorySegment` of an arbitrary size 10. This is fine for this example but what is the correct way to obtain a String from a `char *` when I have no idea of the size of the char array?","questionMetadata":{"type":"implementation","tag":"java","level":"intermediate"},"answer":"You should be able to re-interpret the `MemorySegment` to a size for the UTF-8 conversion, with suitable value for `byteSize`. Some APIs\/libraries may have documentation or header file definition which gives you the expected size:\n\n\n\n```\n\/\/ JDK21:\nreturn result.reinterpret(byteSize).getUtf8String(0);\n\n\/\/ JDK22:\nreturn result.reinterpret(byteSize).getString(0);\n\n```\n\nThe `reinterpret` call does not re-allocate a chunk of memory with size `byteSize` - it just returns a `MemorySegment` that permits access to that range.\n\n\nExample JDK22 which uses large size:\n\n\n\n```\nprivate static final SymbolLookup SQLITE = SymbolLookup.libraryLookup(\"sqlite3\", Arena.global());\nprivate static final MemorySegment MS = SQLITE.find(\"sqlite3_libversion\")\n .orElseThrow(() -> new RuntimeException(\"Could not find method 'sqlite3_libversion\"));\nprivate static final Linker LINKER = Linker.nativeLinker();\nprivate static final MethodHandle MH = LINKER.downcallHandle(MS, FunctionDescriptor.of(ValueLayout.ADDRESS));\n\npublic static void main(String... args) throws Throwable {\n final MemorySegment result = (MemorySegment)MH.invoke();\n String ver = result.reinterpret(Integer.MAX_VALUE).getString(0);\n System.out.println(\"SQLITE version \"+ver);\n}\n\n```\n\nNote that using [jextract](https:\/\/jdk.java.net\/jextract\/) simplifies setting up the bindings."}
{"questionId":"193056785aea96683873a196c44a86238be4cff9adbddff333b568546e565cfb","question":"Check whether T is a function type\nI have been able to emulate some C++ traits like `is_null_pointer`, `is_integral`, `is_floating_point`, and `is_array` with `_Generic`, but am at a loss at `is_function`.\n\n\nIs it possible to emulate `is_function` in C? Say given an expression, I'd like `_Generic` to evaluate to 1 if it is a function name, else 0. And as a function name and a function pointer are interchangeable in most cases, the behavior should be the same for a function pointer.\n\n\nI need this because I have some macros that expect a function. I'd like to assert at compile-time that the argument is a function.","questionMetadata":{"type":"implementation","tag":"c","level":"intermediate"},"answer":"How about this (ISO C23):\n\n\n\n```\n#define IS_FUNCTION(T) \\\n _Generic((T), \\\n typeof(T)*: true, \\\n default : false)\n\n```\n\nThe trick here is that there are only two types in C that \"decay\" whenever used in an expression, arrays and functions. An array will however decay into a pointer to the first element, while a function will decay into a pointer to that function type.\n\n\nThe operand of `typeof` does not cause decay. So if passing an array like `int [10]` to the macro, The `(T)` will decay into `int*` but `typeof(T)` will give `int [10]`. And then in turn `typeof(T)*` will be an `int (*)[10]`.\n\n\nBut in case of a function, you end up with a function pointer during `typeof(T)*`, the same type as `(T)`. And in case you pass an explicit function pointer, then `typeof(T)*` is a pointer to function pointer.\n\n\nSome tests:\n\n\n\n```\n#include <stdio.h>\n\n#define IS_FUNCTION(T) \\\n _Generic((T), \\\n typeof(T)*: true, \\\n default : false)\n\n#define TEST(T) printf(#T \" is %sa function\\n\", IS_FUNCTION(T)?\"\":\"not \")\n\nvoid func (void);\n\nint main(void)\n{\n int (*fptr)(void);\n int array[10];\n int x;\n\n TEST(func);\n TEST(main);\n TEST(fptr);\n TEST(array);\n TEST(x);\n TEST(nullptr);\n}\n\n```\n\nOutput:\n\n\n\n```\nfunc is a function\nmain is a function\nfptr is not a function\narray is not a function\nx is not a function\nnullptr is not a function"}
{"questionId":"7339c9255a2f03feb8f5875247dae849d04e8d14cd3bbc5e90ea6e20909bedd7","question":"Why does direct initialization use a const lvalue reference qualified conversion function?\nI have two classes `st` and `foo`:\n\n\n\n```\nstruct st {\n st() = default;\n\n st(const st&) {\n std::cout << \"copy ctor.\" << std::endl;\n }\n\n st(st&&) {\n std::cout << \"move ctor.\" << std::endl;\n }\n};\n\nstruct foo {\n operator st&() & {\n return s;\n }\n\n operator const st&() const& {\n return s;\n }\n\n operator st&&() && {\n return std::move(s);\n }\n\n operator const st&&() const&& {\n return std::move(s);\n }\n\n st s;\n};\n\n```\n\nWhen I run code like this:\n\n\n\n```\nst s = st(foo());\n\n```\n\nIt calls the copy constructor of `st`.\n\n\nWhy does it not call `foo`'s rvalue-reference-qualified conversion function and use `st`'s move constructor?\n\n\n\n\n---\n\n\n**EDIT**:\n\n\n- `MSVC` can compile this, but `GCC` and `Clang` cannot.\n- If I remove `const & operator`, then `MSVC`, `GCC`, `Clang` can compile.\n- If I remove `&& operator`, then `MSVC` and `GCC` can compile, but `Clang` cannot.\n- If I remove `&& operator` and `const && operator`, then `MSVC`, `GCC`, `Clang` can compile.","questionMetadata":{"type":"conceptual","tag":"c++","level":"advanced"},"answer":"Calling the copy constructor is wrong. The compiler is not following the specification.\n\n\nThe call is ill-formed, because overload resolution should be ambiguous.\n\n\nThe problem is quite generally that if you have two viable overloads, in this case the constructors\n\n\n\n```\nst(const st&);\n\n```\n\nand\n\n\n\n```\nst(st&&);\n\n```\n\nand both overloads require a user-defined conversion sequence in the argument where both use a *different* conversion function or constructor, the two conversion sequences are *always* considered equally good. How good the standard conversion sequences and the reference binding involved are, is only considered if the two user-defined conversion sequences would use the *same* conversion operator or constructor."}
{"questionId":"c30f69c4bf339f13ab500fe3f0ebef0332e57f6f7f5d7813d2856a0b4e18e05f","question":"Issue with using a Context Provider in the onMessage() method of react-native-firebase\/messaging?\nAs a newcomer in the developer world and currently working on a React Native Expo project, I'm seeking your help for the first time.\n\n\nWe receive FCM notifications in the app; we want to display these notifications with a Toast. But how do we prevent the Toast from being displayed if the user is already on the right page and can see the interface update? Does the 'onMessage' callback from FCM not have access to the context provider?\n\n\nMy objective is to prevent the appearance of a notification if the user is on a certain page (in this case, a conversation). The packages used are react-native-firebase (for backend and reception) and Notifee for displaying notifications.\nThe issue i'm encountering is that when a new message is posted in a conversation, users receive a notification even if they are on the page at that moment.\n\n\nThus, I need to determine on which page the user is and save it.\nI came up with the idea of using the name of the page concatenated with the conversation ID within the existing provider to know when to push the notification or not.\n\n\nThe problem is that in the messaging().onMessage() method of firebase\/messaging, it seems that the Context is not recognized and always appears empty.\n\n\n**Here's the relevant code:**\n\n\n\n```\nuseEffect(() => {\n const {currentPage} = useContext(DataContext);\n console.log('current page on HomeScreen', currentPage);\n ...\n\n if (!isExpoGo) {\n if (requestUserPermission()) {\n messaging().getToken().then((token) => {\n sendPushToken(token, user);\n });\n } else {\n console.log('Messaging: failed token status');\n }\n\n \/\/ Assume a message-notification contains a \"type\" property in the data payload of the screen to open\n messaging().onNotificationOpenedApp((remoteMessage) => {\n console.log(\n 'Notification caused app to open from background state:',\n remoteMessage.notification,\n );\n \/\/ navigation.navigate(remoteMessage.data.type);\n console.log('App should navigate : ', remoteMessage.data);\n });\n\n \/\/ Check whether an initial notification is available\n messaging()\n .getInitialNotification()\n .then((remoteMessage) => {\n if (remoteMessage) {\n console.log(\n 'Notification caused app to open from quit state:',\n remoteMessage.notification,\n );\n console.log('App woke up : ', remoteMessage.data);\n }\n });\n\n ...\n\n const unsubscribe = messaging().onMessage((remoteMessage) => {\n ...\n console.log('current Page FCM :', currentPage);\n ...\n });\n\n return unsubscribe;\n }\n }, []);\n\n```\n\n**And here are the log returns:**\n\n\n\n```\n LOG current page on HomeScreen conversation_event\/66260fb009ea6ba6a551c23b \n LOG forground current Page :\n\n```\n\nIf you have any suggestions, I'd greatly appreciate them as I've been grappling with this issue for several days without\u00a0success.","questionMetadata":{"type":"implementation","tag":"javascript","level":"intermediate"},"answer":"i think you need use navigationRef.getCurrentRoute() to stop action\nor any route that you used\n\n\n\n```\n import {\n StackActions,\n CommonActions,\n createNavigationContainerRef,\n} from '@react-navigation\/native';\n\n export const navigationRef = createNavigationContainerRef();\n\n const currentRouteName = navigationRef.getCurrentRoute()?.name;\n\n const isFocused = currentRouteName === \"pageNameYouWant \"\n return isFocused == 
true ? null : unsubscribe;\n\n```\n\nI hope I was useful and good luck \ud83e\udd1e\n\n\n======>>>> update ======>>>>\n\n\n1- first create NavigationActions.js file\n\n\n2- add\n\n\n\n```\nexport const navigationRef =createNavigationContainerRef();\nexport const isReadyRef = React.createRef();\n\n```\n\n3- make sure you use navigationRef inside NavigationContainer in App.js\n\n\nlike\n\n\n\n```\n <NavigationContainer\n theme={theme === 'Dark' ? darkMode : lightMode}\n ref={navigationRef}\n onReady={() => {\n isReadyRef.current = true;\n }}\n onStateChange={async () => onStateChange()}>"}
{"questionId":"dae74f2c3ea9a7dcd7c8383d95b23cff8d4ab12f64c81acc90b2c71a1354255b","question":"Extracting a value from a set of columns based on the minimum value within another set of columns in R\nIn my data frame, I have 15 columns:\n\n\n1. Subject ID\n2. A set of 7 columns with the subject's age at particular time points (age1, age2, etc)\n3. A set of 7 columns with the subject's scores at those particular time points (corresponding to the ages above; score1, score2, etc).\n\n\nMost participants only have age1 and score1 (i.e., they only obtained a score at a single time point), but some will have more if they were tested at multiple time points.\n\n\nI would like to create two new columns:\n\n\n1. minScore: The minimum value out of out of columns score1:score7, ignoring NAs.\n2. scoreAge: The subject's age corresponding to the time point of their minimum score. For example, if a subject's lowest score is the value of score3, I want this column to have the value of age3, etc. This could be NA if the subject's age is missing for that time point.\n\n\n\n```\ndata <- structure(list(subject_id = c(\"191-11173897\", \"191-11561329\", \n\"191-11700002\", \"191-11857141\", \"191-11933910\"), age1 = c(39, \n7, NA, NA, 16), age2 = c(36, NA, NA, NA, 37), age3 = c(9, NA, \nNA, NA, NA), age4 = c(NA_real_, NA_real_, NA_real_, NA_real_, \nNA_real_), age5 = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_\n), age6 = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), \n age7 = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_\n ), score1 = c(10.6, 12.1, 9.8, NA, 10.6), score2 = c(9.8, \n NA, NA, NA, 11), score3 = c(11.3, NA, NA, NA, NA), score4 = c(NA_real_, \n NA_real_, NA_real_, NA_real_, NA_real_), score5 = c(NA_real_, \n NA_real_, NA_real_, NA_real_, NA_real_), score6 = c(NA_real_, \n NA_real_, NA_real_, NA_real_, NA_real_), score7 = c(NA_real_, \n NA_real_, NA_real_, NA_real_, NA_real_)), row.names = c(NA, \n-5L), class = c(\"tbl_df\", \"tbl\", \"data.frame\"))","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"With `rowwise` and `sort`\/`order` to avoid warnings and `Inf` in case of all `NA`s\n\n\n\n```\nlibrary(dplyr)\n\ndata %>% \n rowwise() %>% \n mutate(minScore = sort(c_across(score1:score7))[1], \n scoreAge = c_across(age1:age7)[order(c_across(score1:score7))[1]]) %>% \n ungroup()\n\n```\n\noutput\n\n\n\n```\n# A tibble: 5 \u00d7 17\n subject_id age1 age2 age3 age4 age5 age6 age7 score1 score2 score3\n <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n1 191-11173897 39 36 9 NA NA NA NA 10.6 9.8 11.3\n2 191-11561329 7 NA NA NA NA NA NA 12.1 NA NA \n3 191-11700002 NA NA NA NA NA NA NA 9.8 NA NA \n4 191-11857141 NA NA NA NA NA NA NA NA NA NA \n5 191-11933910 16 37 NA NA NA NA NA 10.6 11 NA \n score4 score5 score6 score7 minScore scoreAge\n <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n1 NA NA NA NA 9.8 36\n2 NA NA NA NA 12.1 7\n3 NA NA NA NA 9.8 NA\n4 NA NA NA NA NA NA\n5 NA NA NA NA 10.6 16"}
{"questionId":"130487ad4173a94f789f84286ba06b76f6b059345807fcf0c72f535870cad51f","question":"Filtering a matrix for unique combination in R\nI'm trying to filter such that once a unique number from a pair of number is found starting from the top of the matrix, any subsequent pair entry is removed from the matrix leaving only the the filtered data.\n\n\n\n```\nT_data = c(7,9,8,10,2,10,5,9,1,8,2,1,4,7,5,4,2,5)\n\nT_new = matrix(T_data,ncol=2,byrow=TRUE)\n\n```\n\ndesired output:\n7 9, 8 10, 2 1, 5 4\n\n\n\n```\n# Desired output\n> matrix(c(7, 9, 8, 10, 2, 1, 5, 4), ncol = 2, byrow = TRUE)\n [,1] [,2]\n[1,] 7 9\n[2,] 8 10\n[3,] 2 1\n[4,] 5 4\n\n```\n\nI've tried writing my own loop but I presume there is a simple way to do this in R?","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"You'll probably get the best performance with a well-designed `for` loop:\n\n\n\n```\nuniquemat <- function(x) {\n y <- array(match(c(x), u <- unique(c(x))), dim(x))\n u <- logical(length(u))\n k <- logical(nrow(x))\n u[y[1,]] <- k[1] <- TRUE\n for (i in 2:nrow(x)) if (!any(u[y[i,]])) u[y[i,]] <- k[i] <- TRUE\n x[k,]\n}\n\nuniquemat(T_data)\n#> [,1] [,2]\n#> [1,] 7 9\n#> [2,] 8 10\n#> [3,] 2 1\n#> [4,] 5 4\n\n```\n\nBenchmarking with a larger dataset:\n\n\n\n```\nfReduce <- function(x) { # from SamR\n Reduce(\\(x,y) \n if(any(x %in% y)) x else c(x, y), \n asplit(x, 1)\n ) |> matrix(ncol = ncol(x), byrow = TRUE)\n}\n\nT_data <- matrix(sample(1e4, 1e4, 1), ncol = 2)\n\nbench::mark(\n uniquemat = uniquemat(T_data),\n fReduce = fReduce(T_data)\n)\n#> # A tibble: 2 \u00d7 6\n#> expression min median `itr\/sec` mem_alloc `gc\/sec`\n#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl>\n#> 1 uniquemat 4.3ms 4.68ms 191. 570KB 29.8\n#> 2 fReduce 274.2ms 274.88ms 3.64 201MB 18.2\n\n```\n\nUsing `Reduce` this way gets very slow for large matrices because `x` grows iteratively, [which is bad](https:\/\/stackoverflow.com\/q\/73049791\/9463489). A final performance check on a matrix with 100K rows:\n\n\n\n```\nT_data <- matrix(sample(1e5, 1e5, 1), ncol = 2)\n\nbench::mark(\n uniquemat = uniquemat(T_data),\n fReduce = fReduce(T_data)\n)\n#> # A tibble: 2 \u00d7 6\n#> expression min median `itr\/sec` mem_alloc `gc\/sec`\n#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl>\n#> 1 uniquemat 86.5ms 99.8ms 7.41 5.18MB 14.8\n#> 2 fReduce 24s 24s 0.0416 19.53GB 10.5"}
{"questionId":"9d1592aa5076b1e334c97d9be052ca5d36e50f77b55552876e1f6d616d4a0a87","question":"CALL with a modified return address\nWhat's the best way to handle a CALL in x64 assembly, that should return to a slightly shifted return address? Mainly concerning efficiency\/execution speed. I'll briefly explain what I'm trying to do.\n\n\n**Background**\n\n\nI have a custom, interpreted visual scripting language, that gets compiled to native code. This language has builtin stack-based coroutines, and previously they were still handled semi-interpreted (with a separate stack-class to store the coroutine-data). I'm in the process of nativizing it entirely, so that only RSP is used.\n\n\nOne part of those coroutines is the ability for nested yielding, meaning if a coroutine calls another yielding method, that method can internally yield to suspend the entire invokation. This information is handled via a \"YieldState\" struct, stored in an register. That means, that for the new fully nativized variant, we can just call a yielding method from a coroutine with a call-instruction:\n\n\n\n```\ncall 12345; \/\/ [rip+12345] => yieldingMethod\n\n```\n\nAt least, in theory. As our coroutines are stack-based, we store local variables plainly on the stack, not in some sort of class like stackless coroutines might do. This requires cleanup (in case the coroutine is destroyed before finishing) to be handled via another method, which I called \"interrupt handler\". Such interrupt-handler being invoked is quite common in my practical use-case, but not overly so. So my goal was to provide something that is faster than an exception-handler (which usually requires some global lookup of the frame), but doesn't require explicitely setting this address for each call. So what I did was embedd the interrupt-handler address between the call and the return-address - since for the old version of the code, we had to load the return manually, this was not a problem:\n\n\n\n```\nlea rcx,[rip+25]; \/\/ 25 is the assumed byte-size up until the return address\nmov rdx,rbx; \/\/ load non-native call stack\ncall prepareMethodYielding; \/\/ stores return-address on stack\njmp 12345; \/\/ actually call our \"yieldingMethod\"\nmov r15,interruptAddress;\n\n```\n\nThe last instruction is never executed - we lea the return address to actually skip it. We only have it here to be able to lookup the interrupt-handler. Given a resume-address, we can just decrement the pointer by 8, and we have the address of that resumes interrupt. The \"mov r15\" in our case is just to allow us to disassemble the code properly; we could just embedd the address alone, but that would confuse any external disassembler.\n\n\n**The actual problem**\n\n\nNow in the new version, there is no \"prepareMethodYielding\", but only a call - at least, optimally. 
But \"call\" in itself doesn't allow us to do a modified return-address, so here I'm faced with a few options, and I want to know which one is the best.\n\n\n*Option A - lea + push + jmp*\n\n\nOur first option is to simulate the \"call\", but push the return-address manually:\n\n\n\n```\nlea rax,[rip+10h]\npush rax\njmp A6 \/\/ yieldingMethod\n\n```\n\nThis requires 3 instructions, but no access to memory.\n\n\n*Option B - push from memory*\n\n\nWe could reduce the number of options, by storing the return-address in some area of constant-memory:\n\n\n\n```\npush qword ptr[rip+1234] \/\/ return-address stored here\njmp A6 \/\/ yieldingMethod\n\n```\n\nNow we need only one push an no intermediate register, though now we need an access to memory, which could potentially be further away in the data-section.\n\n\n*Option C - modify the return address in the called function*\n\n\nAnother option that I see would be to adjust the return-address that is produced by call inside the called method. All those methods here are compiled using my own calling convention, so they don't adhere to x64 or any other.\n\n\n\n```\n\/\/ caller\ncall A6 \/\/ yielding method\n\n\/\/ callee, first instruction\nadd qword ptr[rsp],10 \/\/ size of interrupt-embedding is always the same\n\n```\n\nThis would also only be one instruction, with a small encoding. Though just from a design point of view, I don't like it very much, since it couples the information about the embedding of the callee into the caller - though, if this was the most efficient variant, I might still go for it.\n\n\n*Option D - don't modify the return-address at all*\n\n\nOur last option is to not modify the return-address at all, but instead change how lookup and return is handled.\n\n\n\n```\ncall 12345; \/\/ yieldingMethod\nmov r15,interruptAddress; \/\/ is actually executed now (but value is not used)\n\n```\n\nSo here, we would change where we lookup the interrupt-address (as the return-address now points in front of the fake instruction, instead of behind it). Then, upon return from the call, we would execute the movabs instruction, but discard the value is loaded.\nThe upside here is that overall code-size is the smallest, since we don't need to add any additional instructions that aren't already there. However, we are executing a 10-byte mov instruction, which could be slower than some of the other variants. It kind of depends here on what the CPU is doing - if it already decodes the fake instruction, even if it doesn't directly reach it, it might be the best idea to just execute it, instead of modifying the return address. Same thing, if the CPU can somehow detect that the instruction has no effect, as it's value is never read, during register renaming, then it could effectively be free - atm, I'm using a register that is not used, to distinquish for my own assembler; but then it would probably make sense to use a register that is overwritten soon after, I assume.\nThough I'm unsure on what would actually happen here.\n\n\n**Conclusion**\n\n\nSo, which of those 4 options seems the most efficient to you? 
I'm also open to other ideas, though the general design of how the coroutines are done is finished and functional, so something like using a statemachine-based approach which IIRC some coroutines use, is not really an option here.","questionMetadata":{"type":"optimization","tag":"assembly","level":"advanced"},"answer":"Here's a variant of option D that could work:\n\n\nThe x86 architecture has a long nop instruction `nop r\/m32`, which performs no effect. The operand of this instruction is ignored and can be a memory operand. If you use this instruction with a modr\/m operand that has a 32 bit displacement, you can effectively embed a 32 bit number in the instruction stream with no harm.\n\n\nWhile your interrupt address is a 64 bit address, it could be possible to express it as a 32 bit distance from some base address, permitting you to get away with a shorter encoding. Or use a pair of such long nop instructions to encode the full 64 bits.\n\n\nThis could look like:\n\n\n\n```\ncall 12345; \/\/ yielding method\nnop [rdx+interruptAddress-baseAddress] \/\/ encode interruptAddress\n\n```\n\n\n\n---\n\n\nAn advantage to this is that running one NOP after return is even cheaper than an instruction to modify the return address. More importantly, it avoids a mispredict from the [return-address stack predictor](https:\/\/blog.stuffedcow.net\/2018\/04\/ras-microbenchmarks\/#call0) which assumes that `call` and `ret` will be paired the normal way."}
{"questionId":"e83c89dfc6a77c760db5941a7e47abd44e136c3fa35e9491ad352ca623ae31ce","question":"what's use-system-variables in angular material\nwhen I try to generate a custom theme in `@angular\/material@18` using `nx generate @angular\/material:m3-theme` it asks me for this question\n\n\n\n> \n> \u221a Do you want to use system-level variables in the theme? System-level variables make dynamic theming easier through CSS custom properties, but increase the bundle size. (y\/N) \u00b7 true\n> \n> \n> \n\n\nwhen it's true it adds `use-system-variables: true`\n\n\n\n```\n $light-theme: mat.define-theme((\n color: (\n theme-type: light,\n primary: $_primary,\n tertiary: $_tertiary,\n+ use-system-variables: true,\n ),\n+ typography: (\n+ use-system-variables: true,\n+ ),\n ));\n\n```\n\nmy question, what exactly does `use-system-variables` do? I can't find any documentation about it","questionMetadata":{"type":"version","tag":"typescript","level":"intermediate"},"answer":"To add on to the other answer:\n\n\nThese have originated from [Material Design Tokens](https:\/\/m3.material.io\/foundations\/design-tokens\/how-to-read-tokens#20829697-fd3d-4802-b295-96ba564f2e50).\n\n\nThere are three kinds of tokens in Material:\n\n\n### Reference tokens\n\n\n\n> \n> All available tokens with associated values\n> \n> \n> \n\n\n### System tokens\n\n\n\n> \n> Decisions and roles that give the design system its character, from color and typography, to elevation and shape\n> \n> \n> \n\n\n### Component tokens\n\n\n\n> \n> The design attributes assigned to elements in a component, such as the color of a button icon\n> \n> \n> \n\n\nWith three kinds of tokens, teams can update design decisions globally or apply a change to a single component.\n\n\n\n\n---\n\n\n### System tokens\n\n\n\n> \n> Subsystem tokens begin with md.sys.\n> \n> \n> These are the decisions that systematize the design language for a specific theme or context.\n> \n> \n> System tokens define the purpose that a reference token serves in the UI.\n> \n> \n> When applying theming, a system token can point to different reference tokens depending on the context, such as a light or dark theme. Whenever possible, system tokens should point to reference tokens rather than static values.\n> \n> \n> \n\n\nThe code to implement the tokens will look like below:\n\n\n\n```\n@use 'sass:map';\n@use '@angular\/material' as mat;\n\n$light-theme: mat.define-theme((\n color: (\n theme-type: light,\n primary: mat.$azure-palette,\n tertiary: mat.$blue-palette,\n use-system-variables: true,\n ),\n typography: (\n use-system-variables: true,\n ),\n));\n\n\n@include mat.core();\n@include mat.color-variants-backwards-compatibility($light-theme);\n\n:root {\n @include mat.all-component-themes($light-theme);\n @include mat.system-level-colors($light-theme);\n @include mat.system-level-typography($light-theme);\n}\n\n```\n\nAs for what it does, it looks like it a highest css variable definition with the prefix `--sys`, this is being used by all the other global material styles.\n\n\n### Output After Configuring:\n\n\n\n```\n--mdc-plain-tooltip-supporting-text-font: var(--sys-body-small-font);\n...\n--sys-body-small-font: Roboto, sans-serif;"}
{"questionId":"99f80bc037b32ce1c722bf63244a3534d7d13ad40b69f6101db76877721a5201","question":"Algorithm for transformation of an 1-D-gradient into a special form of a 2-D-gradient\nAssuming there is a 1-D array\/list which defines a color gradient I would like to use it in order to create a 2-D color gradient as follows:\n\n\nLet's for simplicity replace color information with a single numerical value for an example of a 1-D array\/list:\n\n\n\n```\n[ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ]\n\n```\n\nTo keep the gradient progressing diagonally with progress of the largest next value diagonally over the entire array I would like to transform the 1-D sequence into a 2D-array with deliberately chosen shape (i.e. width\/height, i.e. number of rows x number of columns where row \\* columns == length of the 1-D gradient array) as follows:\n\n\n\n```\n[[ 1 2 4 ]\n [ 3 6 7 ]\n [ 5 9 10 ]\n [ 8 12 13 ]\n [ 11 14 15 ]]\n\n```\n\nor\n\n\n\n```\n[[ 1 2 4 7 10 ]\n [ 3 6 9 12 13 ]\n [ 5 8 11 14 15 ]]\n\n```\n\nor starting from a sequence:\n\n\n\n```\n[ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16]\n\n```\n\nto\n\n\n\n```\n[[ 1 2 4 7 ]\n [ 3 6 9 11 ]\n [ 5 10 13 14 ]\n [ 8 12 15 16 ]]\n\n```\n\nIs there a ready-to-use out of the box Python module or C-library capable to perform such reshaping of an array or need this special case be coded by hand? And if coding the loops by hand is necessary, what would be the most efficient way of doing this as the sequence I would like to transform is 256\u00b3 large in size? I there maybe already ready for use code for such reshaping\/transformation out there in the deep space of the Internet I have failed to find asking both the search engines and the LLMs?","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"### General idea\n\n\nFrom what I can see, this can be done in three steps:\n\n\n1. Split the sequence in fragments as if they were diagonals of an array of a given shape.\n2. Separate the elements of each fragment by the parity of their index.\n3. Assemble a new array from the modified diagonal fragments.\n\n\n### Step 1. Split the sequence into diagonal fragments\n\n\nI think it will be enough to find the stopping points, so then we can slice the sequence with them. For this, we can apply the cumulative sum to a sequence of diagonal lengths:\n\n\n\n```\nimport numpy as np\nfrom numba import njit\n\n@njit\ndef flat_diagonal_stops(height, width):\n '''Return a sequence of breakpoints separating a sequence \n of length height*width into a sequence of matrix diagonals\n of the shape (height, width)\n '''\n min_dim = min(height, width)\n lengths = np.empty(height + width, dtype='int')\n lengths[:min_dim] = [*range(min_dim)] # diagonal lengths in the lower triangle\n lengths[min_dim:1-min_dim] = min_dim # diagonal lengths in the main body\n lengths[:-min_dim:-1] = lengths[1:min_dim] # diagonal lengths in the upper triangle\n return lengths.cumsum()\n\n```\n\n### Step 2. Separate elements by index parity\n\n\nA sequence transformation like this:\n\n\n\n```\n(0, 1, 2, 3, 4, 5) >>> (0, 2, 4, 5, 3, 1)\n\n```\n\nis actually a separation of elements by the parity of their positional index. 
Elements with an even index are shifted to the left, while the others - to the right in reverse order:\n\n\n\n```\n@njit\ndef separate_by_index_parity(arr):\n '''Return a numpy.ndarray filled with elements of arr, \n first those in even-numbered positions, \n then those in odd-numbered positions in reverse order\n '''\n out = np.empty_like(arr)\n middle = sum(divmod(len(out), 2))\n out[:middle] = arr[::2]\n out[:middle-len(out)-1:-1] = arr[1::2]\n return out\n\n```\n\n### Step 3. Assemble the fragments as diagonals of a new array\n\n\nTo do this, we can create a flat representation of the required output and work within it by slicing diagonal positions:\n\n\n\n```\n@njit\ndef assemble_diagonals_separated_by_parity(arr, height, width):\n '''Return a matrix of shape (height, width) with elements \n of the given sequence arr arranged along diagonals, \n where the elements on each diagonal are separated \n by the parity of their index in them\n '''\n out = np.empty(height*width, dtype=arr.dtype)\n stops = flat_diagonal_stops(height, width)\n out_step = width + 1\n for offset, (start, stop) in enumerate(zip(stops[:-1], stops[1:]), 1-height):\n # out_from: the first element of an off-diagonal\n # out_to : next after the last element of an off-diagonal\n # out_step: a stride to get diagonal items\n out_from = -offset*width if offset < 0 else offset\n out_to = out_from + (stop-start)*out_step # stop - start is equal to the diagonal size\n out[out_from:out_to:out_step] = separate_by_index_parity(arr[start:stop])\n return out.reshape(height, width)\n\n```\n\nThe result is a stacking of the modified sequence on diagonals from bottom to top and from left to right. To get other types of stacking, we combine flipping and transposing. For example, we can stack elements in the left-to-right and top-to-bottom order along anti-diagonals as follows (note the reverse order of dimensions `(width, height)` in a function call):\n\n\n\n```\nheight, width = 6, 4\narr = np.arange(1, 1+height*width)\nout = np.fliplr(assemble_diagonals_separated_by_parity(arr, width, height).T)\n\nprint(out)\n\n```\n\n\n```\n[[ 1 2 4 7]\n [ 3 6 9 11]\n [ 5 10 13 15]\n [ 8 14 17 19]\n [12 18 21 22]\n [16 20 23 24]]\n\n```\n\n### Code for experiments\n\n\n\n```\nimport numpy as np\nfrom numba import njit\n\n@njit\ndef flat_diagonal_stops(height, width):\n min_dim = min(height, width)\n lengths = np.empty(height + width, dtype='int')\n lengths[:min_dim] = [*range(min_dim)]\n lengths[min_dim:1-min_dim] = min_dim\n lengths[:-min_dim:-1] = lengths[1:min_dim]\n return lengths.cumsum()\n\n@njit\ndef separate_by_index_parity(arr):\n out = np.empty_like(arr)\n middle = sum(divmod(len(out), 2))\n out[:middle] = arr[::2]\n out[:middle-len(out)-1:-1] = arr[1::2]\n return out\n\n@njit\ndef assemble_diagonals_separated_by_parity(arr, height, width):\n if height == 1 or width == 1: \n return arr.reshape(height, width).copy()\n out = np.empty(height*width, dtype=arr.dtype)\n stops = flat_diagonal_stops(height, width)\n out_step = width + 1\n for offset, (start, stop) in enumerate(zip(stops[:-1], stops[1:]), 1-height):\n out_from = -offset*width if offset < 0 else offset\n out_to = out_from + (stop-start)*out_step\n out[out_from:out_to:out_step] = separate_by_index_parity(arr[start:stop])\n return out.reshape(height, width)\n\nheight, width = 6, 4\narr = np.arange(1, 1+height*width)\nout = np.fliplr(assemble_diagonals_separated_by_parity(arr, width, height).T)\n\nprint(out)\n\n```\n\n\n\n---\n\n\n### P.S. 
Stack the data directly along anti-diagonals\n\n\nLet's specialize the assembly function to work directly with anti-diagonals, so as not to get confused with flip-transpose tricks. In this case, we have a shorter slicing step, and the starting point will be along the top and right edges. Everything else remains unchanged:\n\n\n\n```\n@njit\ndef assemble_antidiagonals_separated_by_parity(arr, height, width):\n if height == 1 or width == 1: \n return arr.reshape(height, width).copy()\n out = np.empty(height*width, dtype=arr.dtype)\n stops = flat_diagonal_stops(height, width)\n out_step = width - 1\n for offset, (start, stop) in enumerate(zip(stops[:-1], stops[1:])):\n out_from = offset if offset < width else (offset-width+2)*width-1\n out_to = out_from + (stop-start)*out_step\n out[out_from:out_to:out_step] = separate_by_index_parity(arr[start:stop])\n return out.reshape(height, width)\n\n```\n\n\n```\n>>> height, width = 8, 5\n>>> arr = np.arange(1, 1+height*width)\n>>> out = assemble_antidiagonals_separated_by_parity(arr, height, width)\n>>> print(out)\n[[ 1 2 4 7 11]\n [ 3 6 9 13 16]\n [ 5 10 15 18 21]\n [ 8 14 20 23 26]\n [12 19 25 28 31]\n [17 24 30 33 35]\n [22 29 34 37 38]\n [27 32 36 39 40]]"}
{"questionId":"62413e2f61455c9f2a1ef654584224d46d80e9871c858fe54a2e306b580a1063","question":"Using Generational ZGC for JavaFX application running with GraalVM-21 takes warning about not supporting JVMCI, why?\n- OS: MacOS 12.7.3 (intel)\n- JDK:\n\n\n\n> \n> \n> ```\n> openjdk version \"21.0.2\" 2024-01-16\n> OpenJDK Runtime Environment GraalVM CE 21.0.2+13.1 (build 21.0.2+13->jvmci-23.1-b30)\n> OpenJDK 64-Bit Server VM GraalVM CE 21.0.2+13.1 (build 21.0.2+13->jvmci-23.1-b30, mixed mode, sharing)\n> \n> ```\n> \n> \n\n\n- JavaFX version: 21.0.2\n- javafx-maven-plugin.version: 0.0.8\n- javafx.staticSdk.version: 21-ea+11.1\n\n\nI runned the JavaFX application by GraalVM 21 using javafx-maven-plugin.\nThis is JVM options I given to:\n\n\n\n```\n<plugin>\n <groupId>org.openjfx<\/groupId>\n <artifactId>javafx-maven-plugin<\/artifactId>\n <version>${javafx-maven-plugin.version}<\/version>\n <executions>\n <execution>\n <!-- Default configuration for running with: mvn clean javafx:run -->\n <id>default-cli<\/id>\n <configuration>\n <mainClass>${mainClass}<\/mainClass>\n <launcher>app<\/launcher>\n <jlinkZipName>app<\/jlinkZipName>\n <jlinkImageName>app<\/jlinkImageName>\n <noManPages>true<\/noManPages>\n <stripDebug>true<\/stripDebug>\n <noHeaderFiles>true<\/noHeaderFiles>\n <options>\n <option>-XX:+UseZGC<\/option>\n <option>-XX:+ZGenerational<\/option>\n <\/options>\n <\/configuration>\n <\/execution>\n <\/executions>\n<\/plugin>\n\n```\n\nAs you see, Generational ZGC is turned on, but brings a tip of waring:\n\n\n\n```\n[INFO] <<< javafx-maven-plugin:0.0.8:run (default-cli) < process-classes @ jfxdemo <<<\n[INFO] \n[INFO] \n[INFO] --- javafx-maven-plugin:0.0.8:run (default-cli) @ jfxdemo ---\n[0.004s][warning][gc,jvmci] Setting EnableJVMCI to false as selected GC does not support JVMCI: z gc\n\n```\n\nWhat does this mean? Is that meaning my application can't be optimized by graal?\n\n\nNothing did, just for question.","questionMetadata":{"type":"version","tag":"java","level":"intermediate"},"answer":"From these Graal issue tracker cases:\n\n\n1. [ZGC is supported with Graal on recent Java versions](https:\/\/github.com\/oracle\/graal\/pull\/6170) (JDK 17+).\n2. But [Generational ZGC is not yet supported](https:\/\/github.com\/oracle\/graal\/issues\/8117) (<= JDK 21).\n\n\n[Developer comment](https:\/\/github.com\/oracle\/graal\/issues\/2149#issuecomment-1880536245):\n\n\n\n> \n> As some of you have pointed out, ZGC has landed in GraalVM for JDK 17+ (see #[6170](https:\/\/github.com\/oracle\/graal\/pull\/6170)). We are tracking support for Generational ZGC in #[8117: [GR-45919] Add support for Generational ZGC on HotSpot](https:\/\/github.com\/oracle\/graal\/issues\/8117).\n> \n> \n>"}
{"questionId":"3ac3656cede2dabacee72832900daece8135cbb468f56f0e88fa9cad6e0c0246","question":"Get sum for each 5 minute time interval\n## Problem Description\n\n\nI have a table (`#tmstmp`) with 2 columns `dt` (`DATETIME`) and `payload` (`INT`). Eventually I want to sum `payload` for each 5 minute interval there is.\n\n\n## Code\n\n\n### Setup\n\n\n\n```\nDECLARE @start DATETIME = N'2024-1-1 12:00:00';\nDROP TABLE IF EXISTS #tmstmp\n , #numbers;\nCREATE TABLE #tmstmp (\n dt DATETIME PRIMARY KEY\n , payload INT NOT NULL\n);\n\nCREATE TABLE #numbers (\n n INT PRIMARY KEY\n);\nWITH numbers (n) AS (\n SELECT 0 AS n\n UNION ALL\n SELECT n + 1 AS n\n FROM numbers\n WHERE n < 100\n)\nINSERT\n INTO #numbers\nSELECT n\n FROM numbers;\n\nWITH rnd (mins, secs) AS (\n SELECT n2.n AS mins\n , CAST(ABS(CHECKSUM(NEWID())) % 60 AS INT) AS mins\n FROM #numbers AS n1\n , #numbers as n2\n WHERE n1.n < 5\n AND n2.n < 15\n), tmstmp (dt) AS (\n SELECT DATEADD(SECOND, secs, DATEADD(MINUTE, mins, @start)) AS dt\n FROM rnd\n) \nINSERT \n INTO #tmstmp\nSELECT DISTINCT dt\n , -1 AS payload\n FROM tmstmp\n ORDER BY dt;\n\nUPDATE #tmstmp\n SET payload = CAST(ABS(CHECKSUM(NEWID())) % 10 AS INT);\nGO\n\n```\n\n### Non overlapping timeslots are easy\n\n\n\n```\nDECLARE @start DATETIME = N'2024-1-1 12:00:00';\nDECLARE @slotDuration INT = 5;\n\nWITH agg (slot, sum_payload) AS (\n SELECT DATEDIFF(MINUTE, @start, dt) \/ @slotDuration AS slot\n , SUM(payload) AS sum_payload\n FROM #tmstmp\n GROUP BY DATEDIFF(MINUTE, @start, dt) \/ @slotDuration\n)\nSELECT DATEADD(MINUTE, slot * @slotDuration, @start) AS [from]\n , DATEADD(MINUTE, (slot + 1) * @slotDuration, @start) AS [to]\n , sum_payload\n FROM agg;\n\n```\n\n\n\n| from | to | sum\\_payload |\n| --- | --- | --- |\n| 2024-01-01 12:00:00 | 2024-01-01 12:05:00 | 124 |\n| 2024-01-01 12:05:00 | 2024-01-01 12:10:00 | 106 |\n| 2024-01-01 12:10:00 | 2024-01-01 12:15:00 | 95 |\n\n\n### Ultimate Goal: get running timeslots\n\n\nI want, however, to have an entry for **each** interval in the range, that is from `12:00-12:05`, `12:01-12:06`, `12:02-12:07` etc. until the last timeslot.\n\n\nI can construct the limits in the whole range before and use that in a `JOIN` like this:\n\n\n\n```\nDECLARE @start DATETIME = N'2024-1-1 12:00:00';\nDECLARE @slotDuration INT = 5;\nDECLARE @intervals INT = (SELECT DATEDIFF(MINUTE, @start, MAX(dt)) FROM #tmstmp);\n\nWITH ranges ([from], [to], slot) AS (\n SELECT DATEADD(MINUTE, n, @start) AS [from]\n , DATEADD(MINUTE, n + @slotDuration, @start) AS [to]\n , n AS slot\n FROM #numbers\n WHERE n <= @intervals\n), tm_mult (slot, [from], [to], dt, payload) AS (\n SELECT slot\n , [from]\n , [to]\n , dt\n , payload\n FROM #tmstmp\n INNER JOIN ranges\n ON [from] <= dt\n AND dt < [to]\n)\nSELECT MIN([from]) AS [from]\n , MAX([to]) AS [to]\n , SUM(payload) AS sum_payload\n FROM tm_mult\n GROUP BY slot\n ORDER BY slot;\n\n```\n\n\n\n| from | to | sum\\_payload |\n| --- | --- | --- |\n| 2024-01-01 12:00:00 | 2024-01-01 12:05:00 | 124 |\n| 2024-01-01 12:01:00 | 2024-01-01 12:06:00 | 120 |\n| 2024-01-01 12:02:00 | 2024-01-01 12:07:00 | 125 |\n| ... | ... | ... |\n| 2024-01-01 12:14:00 | 2024-01-01 12:19:00 | 19 |\n\n\nWhile this works in this toy example I have hundreds of thousands of timestamps in my real data and worst of all I little impact on the indices. 
My gut feeling tells me that I will create quite some duplication with my inequality `JOIN` and I was wondering whether this is anyways the canonical way of doing it or whether there is a more `SQL-onic` way of doing it? (like the `pythonistas` like to call certain code `pythonic` if it uses the language inherent concepts rather than trying to solve it with general tools).","questionMetadata":{"type":"implementation","tag":"sql","level":"intermediate"},"answer":"Window functions in sql ([WINDOW - microsoft.com](https:\/\/learn.microsoft.com\/en-us\/sql\/t-sql\/queries\/select-window-transact-sql?view=sql-server-ver16) \/ [OVER - microsoft.com](https:\/\/learn.microsoft.com\/en-us\/sql\/t-sql\/queries\/select-over-clause-transact-sql?view=sql-server-ver16)) are a great asset to add to your SQL toolbelt. Also particularly canonical; windows have been in since SQL Server 2005.\n\n\nBelow is an example:\n\n\n\n```\nSELECT\n [From],\n DATEADD(MINUTE, 1, [To]) [To],\n payload\nFROM (\n SELECT\n dt,\n MIN(dt) OVER(ORDER BY dt ROWS BETWEEN 4 PRECEDING AND CURRENT ROW) [From],\n dt [To],\n SUM(payload) OVER(ORDER BY dt ROWS BETWEEN 4 PRECEDING AND CURRENT ROW) payload\n FROM (\n SELECT\n DATEADD(MINUTE, DATEDIFF(MINUTE, 0, dt), 0) dt,\n SUM(payload) payload\n FROM #tmstmp\n GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, dt), 0)\n ) q\n) q\nWHERE DATEDIFF(MINUTE, [From], [To]) > 3\n\n```\n\nI'd like to draw attention to both the `4 PRECEDING` and `DATEADD(MINUTE, DATEDIFF(MINUTE, 0, dt), 0)`. As the later practically floors the datetime to the minute, `2024-01-01 12:04:00.000` is inclusive up to `2024-01-01 12:04:59.999`, but doesn't include `2024-01-01 12:05:00.000`. Hopefully that's the functionality you are looking for.\n\n\nHere is a [fiddle](https:\/\/dbfiddle.uk\/0aIGVBfs)"}
{"questionId":"39b354a823c661be4b473d2e7aad42a688fa6c31f34b76459118543afbc3e40a","question":"JavaFX SplitPane Divider hover color css\nI want to be able to assign a hover color via CSS to a SplitPane Divider in JavaFX.\n\n\nI am able to achive this by using the following CSS\n\n\n\n```\n.split-pane:horizontal > .split-pane-divider {\n -fx-background-color: transparent;\n}\n.split-pane:horizontal > .split-pane-divider:hover {\n -fx-background-color: lightblue;\n}\n\n```\n\nHowever: since the divider is lagging behind the cursor, it causes a flickering effect, since the hover is triggered whenever the divider catches up to the cursor position - but while dragging slowly, it's just a lot of flickering.\n\n\nI want the hover color to be applied throughout any dragging of the divider.\n\n\nIs there a way to achieve this via CSS? I tried `.split-pane-divider:focused` and `.split-pane-divider:selected`, but no luck :(\n\n\nIf not, is there any other way to achieve this?\n\n\nThanks!","questionMetadata":{"type":"implementation","tag":"java","level":"beginner"},"answer":"The solution of Sai Dandem works, but the solution due to the comment of James\\_D makes it quite a bit simpler. Thanks to both of you!\n\n\nFor a general split-pane:\n\n\n\n```\n.split-pane > .split-pane-divider {\n -fx-background-color: transparent;\n}\n.split-pane > .split-pane-divider:hover,\n.split-pane > .split-pane-divider:pressed {\n -fx-background-color: lightblue;\n}"}
{"questionId":"4bdb35d2c44e329ded90dd6fe084e056c6dcf1c041017ea5dddd0e39dc2aac7d","question":"Raku: Using hyper or race with junctions\nI have about 75000 files and I need to search each file for a set of key phrases stored in an array. I have Intel i9 capable of running 20 threads. I am trying to speed up the whole process by slurping each file into a string and matching each key phrase simultaneously. I wonder how I can use hyper\/race to expedite the process even more. Or do junctions automatically and concurrently distribute the tasks across the threads?\n\n\n\n```\n[1] > my $a = (1..10).join\n12345678910\n[3] > my @b = (3, \/5.\/, \/8\\d\/)\n[3 \/5.\/ \/8\\d\/]\n[4] > say $a.match( @b.all )\nall(3, 56, 89)\n\n[4] > say hyper $a.match( @b.all )\nNo such method 'hyper' for invocant of type 'Match'. # what to do?","questionMetadata":{"type":"optimization","tag":"raku","level":"intermediate"},"answer":"Perhaps [App::Rak](https:\/\/raku.land\/zef:lizmat\/App::Rak) can help you with this? Or perhaps its plumbing [rak](https:\/\/raku.land\/zef:lizmat\/rak)?\n\n\nThere's also an [introduction](https:\/\/dev.to\/lizmat\/its-time-to-rak-part-1-30ji).\n\n\nTo answer your question re \"Or do junctions automatically and concurrently distribute the tasks across the threads?\". The idea is that at one point they might, but that's not how they're currently implemented."}
{"questionId":"7111547f8a6dec2221e2d39533b064d3af9d7140cba9fdba550464e33015978a","question":"Removing one field from a struct in polars\nI want to remove one field from a struct, currently I set it up like this, but is there a simpler way to achieve this?\n\n\n\n```\nimport polars as pl\nimport polars.selectors as cs\n\ndef remove_one_field(df: pl.DataFrame) -> pl.DataFrame:\n meta_data_columns = (df.select('meta_data')\n .unnest('meta_data')\n .select(cs.all() - cs.by_name('system_data')).columns)\n print(meta_data_columns)\n return (df.unnest('meta_data')\n .select(cs.all() - cs.by_name('system_data'))\n .with_columns(meta_data=pl.struct(meta_data_columns))\n .drop(meta_data_columns))\n\n# Example usage\ninput_df = pl.DataFrame({\n \"id\": [1, 2],\n \"meta_data\": [{\"system_data\": \"to_remove\", \"user_data\": \"keep\"}, {\"user_data\": \"keep_\"}]\n})\noutput_df = remove_one_field(input_df)\nprint(output_df)\n\n```\n\n\n```\n['user_data']\nshape: (2, 2)\n\u250c\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 id \u2506 meta_data \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 i64 \u2506 struct[1] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 1 \u2506 {\"keep\"} \u2502\n\u2502 2 \u2506 {\"keep_\"} \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nSomething like `select` on fields within a struct?","questionMetadata":{"type":"implementation","tag":"python","level":"beginner"},"answer":"You can use [`struct.field()`](https:\/\/docs.pola.rs\/api\/python\/stable\/reference\/expressions\/api\/polars.Expr.struct.field.html) which can accept either list of strings or multiple string arguments. You know your DataFrame' [`schema()`](https:\/\/docs.pola.rs\/api\/python\/stable\/reference\/dataframe\/api\/polars.DataFrame.schema.html) so you can easily create list of fields you want\n\n\n\n```\nfields = [c[0] for c in input_df.schema[\"meta_data\"] if c[0] != \"system_data\"]\n\ninput_df.with_columns(\n meta_data = pl.struct(\n pl.col.meta_data.struct.field(fields)\n )\n)\n\n\u250c\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 id \u2506 meta_data \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 i64 \u2506 struct[1] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 1 \u2506 {\"keep\"} \u2502\n\u2502 2 \u2506 {\"keep_\"} \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518"}
{"questionId":"15373a42a0e8c561b12f3597b7d2a806567c13e6ccfab35f4bc8e82225720852","question":"Optimization Challenge Due to L1 Cache with Numba\nI've been working on optimizing the calculation of differences between elements in NumPy arrays. I have been using Numba for performance improvements, but I get a 100-microsecond jump when the array size surpasses 1 MB. I assume this is due to my CPU's Ryzen 7950X 1 MB L1 cache size.\n\n\nHere is an example code:\n\n\n\n```\n@jit(nopython=True)\ndef extract_difference_1(random_array):\n shape0, shape1 = random_array.shape\n difference_arr = np.empty((shape0, shape1), dtype=np.float64)\n for i in range(shape0):\n difference_arr[i] = random_array[i,0] - random_array[i,1], random_array[i,1] - random_array[i,2], random_array[i,2] - random_array[i,3], random_array[i,3] - random_array[i,4], random_array[i,4] - random_array[i,5], random_array[i,5] - random_array[i,6], random_array[i,6] - random_array[i,0]\n\n return difference_arr\n\n@jit(nopython=True)\ndef extract_difference_2(random_array):\n shape0, shape1 = random_array.shape\n split_index = shape0 \/\/ 2\n part_1 = extract_difference_1(random_array[:split_index])\n part_2 = extract_difference_1(random_array[split_index:])\n\n return part_1 , part_2\n\nx_list = [18500, 18700, 18900]\ny = 7\nfor x in x_list:\n random_array = np.random.rand(x, y)\n print(f\"\\nFor (x,y) = ({x}, {y}), random_array size is {array_size_string(random_array)}:\\n\")\n for func in [extract_difference_1, extract_difference_2]:\n func(random_array) # compile the function\n timing_result = %timeit -q -o func(random_array)\n print(f\"{func.__name__}:\\t {timing_result_message(timing_result)}\")\n\n```\n\nThe timing results are:\n\n\n\n```\nFor (x,y) = (18500, 7), random_array size is 0.988 MB, 1011.72 KB:\n\nextract_difference_1: 32.4 \u00b5s \u00b1 832 ns, b: 31.5 \u00b5s, w: 34.3 \u00b5s, (l: 7, r: 10000),\nextract_difference_2: 33.8 \u00b5s \u00b1 279 ns, b: 33.5 \u00b5s, w: 34.3 \u00b5s, (l: 7, r: 10000),\n\nFor (x,y) = (18700, 7), random_array size is 0.999 MB, 1022.66 KB:\n\nextract_difference_1: 184 \u00b5s \u00b1 2.15 \u00b5s, b: 181 \u00b5s, w: 188 \u00b5s, (l: 7, r: 10000),\nextract_difference_2: 34.4 \u00b5s \u00b1 51.2 ns, b: 34.3 \u00b5s, w: 34.5 \u00b5s, (l: 7, r: 10000),\n\nFor (x,y) = (18900, 7), random_array size is 1.009 MB, 1033.59 KB:\n\nextract_difference_1: 201 \u00b5s \u00b1 3.3 \u00b5s, b: 196 \u00b5s, w: 205 \u00b5s, (l: 7, r: 10000),\nextract_difference_2: 34.5 \u00b5s \u00b1 75.2 ns, b: 34.4 \u00b5s, w: 34.6 \u00b5s, (l: 7, r: 10000),\n\n```\n\nSplitting the resulting difference\\_arr into two does it, but I prefer if the result is a single array. Especially as later, I will be increasing the y to 10, 50, 100, 1000 and x to 20000. When combining the split arrays part\\_1 and part\\_2 into the difference\\_arr, I found it slower than extract\\_difference\\_1. I think the slowdown is due to the extract\\_difference\\_1 being larger than 1 MB, resulting in L1 cache not being used.\n\n\nIs there a way to maintain the performance while having the result be a single array with Python, Numba or any other package? Or is there a way that will allow me to recombine these arrays without a performance penalty for the resulting array exceeding the L1 cache\u00a0size?","questionMetadata":{"type":"optimization","tag":"python","level":"intermediate"},"answer":"**TL;DR**: The performance issue is **not** caused by your CPU cache. 
It comes from the **behaviour of the allocator** on your target platform which is certainly *Windows*.\n\n\n\n\n---\n\n\n## Analysis\n\n\n\n> \n> I assume this is due to my CPU's Ryzen 7950X 1 MB L1 cache size.\n> \n> \n> \n\n\nFirst of all, the AMD Ryzen 7950X CPU is a Zen4 CPU. This architecture have L1D caches of 32 KiB not 1 MiB. That being said, the L2 cache is 1 MiB on this architecture.\n\n\nWhile the cache-size hypothesis is a tempting idea at first glance. There are two major issues with it:\n\n\nFirst, **the same amount of data is read and written by the two functions**. The fact that the array is split in two parts does not change this fact. Thus, if cache misses happens in the first function due to the L2 capacity, it should also be the case on the other function. Regarding memory accesses, the only major difference between the two function is the order of the access which should not have a significant performance impact anyway (since the array is sufficiently large so latency issues are mitigated).\n\n\nMoreover, the **L2 cache on Zen4 is not so much slower than the L3 one**. Indeed, It should not be more than twice slower while experimental results show a >5x times bigger execution time.\n\n\nI can reproduce this on a Cascade Lake CPU (with a L2 cache of also 1 MiB) on Windows. Here is the result:\n\n\n\n```\nFor (x,y) = (18500, 7), random_array size is 0.988006591796875:\n\nextract_difference_1: 68.6 \u00b5s \u00b1 3.63 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\nextract_difference_2: 70.8 \u00b5s \u00b1 5.2 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\nFor (x,y) = (18700, 7), random_array size is 0.998687744140625:\n\nextract_difference_1: 342 \u00b5s \u00b1 8.31 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\nextract_difference_2: 69.7 \u00b5s \u00b1 2.67 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\nFor (x,y) = (18900, 7), random_array size is 1.009368896484375:\n\nextract_difference_1: 386 \u00b5s \u00b1 7.34 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\nextract_difference_2: 67 \u00b5s \u00b1 4.51 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n```\n\n\n\n---\n\n\n## New hypothesis: allocation overheads\n\n\n\n> \n> Splitting the resulting difference\\_arr into two does it\n> \n> \n> \n\n\nThe main difference between the two functions is that one performs **2 small allocations rather than 1 big**. This rises a new hypothesis: can the allocation timings explain the issue?\n\n\nWe can easily answer this question based on this previous post: [Why is allocation using np.empty not O(1)](https:\/\/stackoverflow.com\/questions\/67189935\/why-is-allocation-using-np-empty-not-o1\/67194519#67194519). We can see that there is a big performance gap between allocations of 0.76 MiB (`np.empty(10**5)`) and the next bigger one >1 MiB. Here are the provided results of the target answer:\n\n\n\n```\nnp.empty(10**5) # 620 ns \u00b1 2.83 ns per loop (on 7 runs, 1000000 loops each)\nnp.empty(10**6) # 9.61 \u00b5s \u00b1 34.2 ns per loop (on 7 runs, 100000 loops each)\n\n```\n\nMore precisely, here is new benchmarks on my current machine:\n\n\n\n```\n%timeit -n 10_000 np.empty(1000*1024, np.uint8)\n793 ns \u00b1 18.8 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n%timeit -n 10_000 np.empty(1024*1024, np.uint8)\n6.6 \u00b5s \u00b1 173 ns per loop (mean \u00b1 std. dev. 
of 7 runs, 10000 loops each)\n\n```\n\nWe can see that the gap is close to 1 MiB. Note that the timings between 1000 KiB and 1024 are not very stable (showing that the result is dependent of hidden low-level parameters -- possibly packing\/alignment issues).\n\n\nThis Numpy allocation behaviour is AFAIK specific to Windows and AFAIR not visible on Linux (gaps might be seen but not that big and not at the same threshold).\n\n\nAn explanation is provided in the linked answer : expensive **kernel calls** are performed beyond a threshold (huge-pages might also play a role too).\n\n\n\n\n---\n\n\n## Solutions\n\n\n\n> \n> Is there a way to maintain the performance while having the result be a single array with Python\n> \n> \n> \n\n\nYes. You can **preallocate the output array memory** so not to pay the expensive allocation overhead. An alternative solution is to **use another allocator** (e.g. jemalloc, tcmalloc).\n\n\nHere is a modified code preallocating memory:\n\n\n\n```\[email protected](nopython=True)\ndef extract_difference_1(random_array, scratchMem):\n shape0, shape1 = random_array.shape\n difference_arr = scratchMem[:shape0*shape1].reshape((shape0, shape1))#np.empty((shape0, shape1), dtype=np.float64)\n for i in range(shape0):\n difference_arr[i] = random_array[i,0] - random_array[i,1], random_array[i,1] - random_array[i,2], random_array[i,2] - random_array[i,3], random_array[i,3] - random_array[i,4], random_array[i,4] - random_array[i,5], random_array[i,5] - random_array[i,6], random_array[i,6] - random_array[i,0]\n\n return difference_arr\n\[email protected](nopython=True)\ndef extract_difference_2(random_array, scratchMem):\n shape0, shape1 = random_array.shape\n split_index = shape0 \/\/ 2\n part_1 = extract_difference_1(random_array[:split_index], np.empty((split_index, shape1)))\n part_2 = extract_difference_1(random_array[split_index:], np.empty((split_index, shape1)))\n\n return part_1 , part_2\n\nx_list = [18500, 18700, 18900]\ny = 7\nscratchMem = np.empty(16*1024*1024)\nfor x in x_list:\n random_array = np.random.rand(x, y)\n print(f\"\\nFor (x,y) = ({x}, {y}), random_array size is {x*y*8\/1024\/1024}:\\n\")\n for func in [extract_difference_1, extract_difference_2]:\n func(random_array, scratchMem) # compile the function\n timing_result = %timeit -q -o func(random_array, scratchMem)\n print(f\"{func.__name__}:\\t {timing_result}\")\n\n```\n\nHere is the result:\n\n\n\n```\nFor (x,y) = (18500, 7), random_array size is 0.988006591796875:\n\nextract_difference_1: 65.1 \u00b5s \u00b1 2.48 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\nextract_difference_2: 71 \u00b5s \u00b1 2.36 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\nFor (x,y) = (18700, 7), random_array size is 0.998687744140625:\n\nextract_difference_1: 69.3 \u00b5s \u00b1 4.05 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\nextract_difference_2: 68.3 \u00b5s \u00b1 3.06 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\nFor (x,y) = (18900, 7), random_array size is 1.009368896484375:\n\nextract_difference_1: 68.5 \u00b5s \u00b1 1.98 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\nextract_difference_2: 68.7 \u00b5s \u00b1 3.14 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n```\n\nWe can see that the problem is now gone! Thus, this confirms the hypothesis that allocations were the main source of the performance issue."}
{"questionId":"8b9c643837f14276cdf54b29cab2aa06dc1bce9c703540a34d07111f1a0211bb","question":"AddressSanitizer:DEADLYSIGNAL from -fsanitize=address flag\nWhenever I run any c++ code, regardless of its contents, I sometimes randomly get the following error:\n\n\n\n```\nAddressSanitizer:DEADLYSIGNAL\nAddressSanitizer:DEADLYSIGNAL\nAddressSanitizer:DEADLYSIGNAL\nAddressSanitizer:DEADLYSIGNAL\nAddressSanitizer:DEADLYSIGNAL\nAddressSanitizer:DEADLYSIGNAL\nAddressSanitizer:DEADLYSIGNAL\nSegmentation fault (core dumped)\n\n```\n\nI am on Ubuntu 23.10 with kernel version: Linux 6.5.0-25-generic x86\\_64. Using the g++ 13.2.0 compiler. The code content really doesn't matter, a basic Hello World program causes the issue. I am compiling with the following flags: `g++ test.cpp -std=c++23 -fsanitize=address -o test`\n\n\nThe problem seems to come from using the following flag:\n\n\n\n```\n-fsanitize=address\n\n```\n\nI noticed that this only started happening when I start messing with a code with dynamic memory allocation, I also sometimes got memory leakage out of nowhere. I thought the problem will disappear when I wrote normal codes again, but that wasn't the case.\n\n\nWhen I tried running the following command:\n\n\n\n```\nulimit -s unlimited\n\n```\n\nAnd running the code again, I got a new error:\n\n\n\n```\n==18240==Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly. ABORTING.\n==18240==ASan shadow was supposed to be located in the \\[0x00007fff7000-0x10007fff7fff\\] range.\n==18240==This might be related to ELF_ET_DYN_BASE change in Linux 4.12.\n==18240==See https:\/\/github.com\/google\/sanitizers\/issues\/856 for possible workarounds.\n==18240==Process memory map follows:\n0x0f0051f00000-0x0f0052000000 \n0x0f0052100000-0x0f0052200000 \n0x0f0052300000-0x0f0052400000 \n0x0f0052500000-0x0f0052600000 \n0x0f0052700000-0x0f0052800000 \n0x0f0052872000-0x0f0052c00000 \n0x0f0052c00000-0x0f0052c26000 \/usr\/lib\/x86_64-linux-gnu\/libc.so.6\n0x0f0052c26000-0x0f0052da5000 \/usr\/lib\/x86_64-linux-gnu\/libc.so.6\n0x0f0052da5000-0x0f0052dfa000 \/usr\/lib\/x86_64-linux-gnu\/libc.so.6\n0x0f0052dfa000-0x0f0052dfe000 \/usr\/lib\/x86_64-linux-gnu\/libc.so.6\n0x0f0052dfe000-0x0f0052e00000 \/usr\/lib\/x86_64-linux-gnu\/libc.so.6\n0x0f0052e00000-0x0f0052e0d000 \n0x0f0053000000-0x0f005309c000 \/usr\/lib\/x86_64-linux-gnu\/libstdc++.so.6.0.32\n0x0f005309c000-0x0f00531cd000 \/usr\/lib\/x86_64-linux-gnu\/libstdc++.so.6.0.32\n0x0f00531cd000-0x0f005325a000 \/usr\/lib\/x86_64-linux-gnu\/libstdc++.so.6.0.32\n0x0f005325a000-0x0f0053265000 \/usr\/lib\/x86_64-linux-gnu\/libstdc++.so.6.0.32\n0x0f0053265000-0x0f0053268000 \/usr\/lib\/x86_64-linux-gnu\/libstdc++.so.6.0.32\n0x0f0053268000-0x0f005326c000 \n0x0f0053400000-0x0f0053425000 \/usr\/lib\/x86_64-linux-gnu\/libasan.so.8.0.0\n0x0f0053425000-0x0f0053534000 \/usr\/lib\/x86_64-linux-gnu\/libasan.so.8.0.0\n0x0f0053534000-0x0f0053569000 \/usr\/lib\/x86_64-linux-gnu\/libasan.so.8.0.0\n0x0f0053569000-0x0f005356d000 \/usr\/lib\/x86_64-linux-gnu\/libasan.so.8.0.0\n0x0f005356d000-0x0f0053570000 \/usr\/lib\/x86_64-linux-gnu\/libasan.so.8.0.0\n0x0f0053570000-0x0f0053aa4000 \n0x0f0053b57000-0x0f0053b6c000 \n0x0f0053b6c000-0x0f0053b6f000 \/usr\/lib\/x86_64-linux-gnu\/libgcc_s.so.1\n0x0f0053b6f000-0x0f0053b8a000 \/usr\/lib\/x86_64-linux-gnu\/libgcc_s.so.1\n0x0f0053b8a000-0x0f0053b8e000 \/usr\/lib\/x86_64-linux-gnu\/libgcc_s.so.1\n0x0f0053b8e000-0x0f0053b8f000 \/usr\/lib\/x86_64-linux-gnu\/libgcc_s.so.1\n0x0f0053b8f000-0x0f0053b90000 
\/usr\/lib\/x86_64-linux-gnu\/libgcc_s.so.1\n0x0f0053b90000-0x0f0053ba0000 \/usr\/lib\/x86_64-linux-gnu\/libm.so.6\n0x0f0053ba0000-0x0f0053c20000 \/usr\/lib\/x86_64-linux-gnu\/libm.so.6\n0x0f0053c20000-0x0f0053c79000 \/usr\/lib\/x86_64-linux-gnu\/libm.so.6\n0x0f0053c79000-0x0f0053c7a000 \/usr\/lib\/x86_64-linux-gnu\/libm.so.6\n0x0f0053c7a000-0x0f0053c7b000 \/usr\/lib\/x86_64-linux-gnu\/libm.so.6\n0x0f0053c82000-0x0f0053c90000 \n0x0f0053c90000-0x0f0053c91000 \/usr\/lib\/x86_64-linux-gnu\/ld-linux-x86-64.so.2\n0x0f0053c91000-0x0f0053cbb000 \/usr\/lib\/x86_64-linux-gnu\/ld-linux-x86-64.so.2\n0x0f0053cbb000-0x0f0053cc5000 \/usr\/lib\/x86_64-linux-gnu\/ld-linux-x86-64.so.2\n0x0f0053cc5000-0x0f0053cc7000 \/usr\/lib\/x86_64-linux-gnu\/ld-linux-x86-64.so.2\n0x0f0053cc7000-0x0f0053cc9000 \/usr\/lib\/x86_64-linux-gnu\/ld-linux-x86-64.so.2\n0x5e446ea4f000-0x5e446ea50000 \/home\/x\/Desktop\/test\n0x5e446ea50000-0x5e446ea51000 \/home\/x\/Desktop\/test\n0x5e446ea51000-0x5e446ea52000 \/home\/x\/Desktop\/test\n0x5e446ea52000-0x5e446ea53000 \/home\/x\/Desktop\/test\n0x5e446ea53000-0x5e446ea54000 \/home\/x\/Desktop\/test\n0x7ffce680d000-0x7ffce682e000 \\[stack\\]\n0x7ffce68d0000-0x7ffce68d4000 \\[vvar\\]\n0x7ffce68d4000-0x7ffce68d6000 \\[vdso\\]\n0xffffffffff600000-0xffffffffff601000 \\[vsyscall\\]\n==18240==End of process memory map.\n\n```\n\nIs there someway to prevent this?","questionMetadata":{"type":"version","tag":"c++","level":"intermediate"},"answer":"@TheFortyTwo Thankyou for you post. I have the same experience. I am using sanitizer with gtest. Approximately every third time I execute the tests I get endless list AddressSanitizer:DEADLYSIGNAL. I have also narrowed it down to be independent of my code.\n\n\nSo, I was thinking it had to be `gtest` or the sanitizer itself. I am on Ubuntu 22.04.4 LTS, kernel 6.5.0-25-generic and gcc(Ubuntu 11.4.0~22.04) 11.4.0.\n\n\nFound this post that may be the solution: [Possible Bug in GCC Sanitizers?](https:\/\/stackoverflow.com\/questions\/77894856\/possible-bug-in-gcc-sanitizers)\n\n\nWill try to upgrade gcc-libs as suggested there."}
{"questionId":"e7e29a28c7b08563d52549671eb5668a5c00832f571819f3879a0bfb8e3fdbc2","question":"Efficient solution for the same-fringe problem for binary trees\nThe *fringe* of a binary tree is the sequence composed by its leaves, from\nleft to right. The ***same-fringe*** problem [Hewitt & Patterson, 1970]\nconsists of determining whether two binary trees have the same fringe.\nFor example, the first two trees below have the same fringe, but the\nlast two do not:\n\n\n\n```\n% . . .\n% \/ \\ \/ \\ \/ \\\n% . 3 1 . 1 .\n% \/ \\ \/ \\ \/ \\\n% 1 2 2 3 -2 3\n\nexample(1, fork(fork(leaf(1), leaf(2)), leaf(3))).\nexample(2, fork(leaf(1), fork(leaf(2), leaf(3)))).\nexample(3, fork(leaf(1), fork(leaf(-2), leaf(3)))).\n\n```\n\nA simple solution is to collect the leaves of one tree into a list and\nthen compare them with the leaves of the other tree.\n\n\n\n```\n\/*\n * SIMPLE SOLUTION\n *\/\n\nsf_1(T1, T2) :-\n walk(T1, [], Xs),\n walk(T2, [], Xs).\n\nwalk(leaf(X), A, [X|A]).\nwalk(fork(L, R), A0, Xs) :-\n walk(R, A0, A1),\n walk(L, A1, Xs).\n\n```\n\nAlthough simple, this solution is considered inelegant: first, because\nit can be impractical when the first tree is very large; and, second,\nbecause it is very inefficient when the trees differ in the first few\nleaves. Thus, a better solution would be to stop the comparison as soon\nas the first difference is found, without completely generating the list\ncontaining the fringe of the first tree.\n\n\n\n```\n\/*\n * SUPPOSEDLY BETTER SOLUTION\n *\/\n\nsf_2(T1, T2) :-\n step([T1], [T2]).\n\nstep([], []).\nstep([T1|S1], [T2|S2]) :-\n next(T1, S1, X, R1),\n next(T2, S2, X, R2),\n step(R1, R2).\n\nnext(leaf(X), S, X, S).\nnext(fork(L, R), S0, X, S) :-\n next(L, [R|S0], X, S).\n\n```\n\nTo compare the performance of these two solutions, I implemented some predicates to run automated experiments (by using SWI-prolog, version 9.0.4):\n\n\n\n```\n\/*\n * EMPIRICAL COMPARISON\n *\/\n\ncomp(Case) :-\n format('fsize sf-1 sf-2\\n'),\n forall( between(1, 10, I),\n ( N is 100000 * I,\n tree(1, N, A),\n ( Case = true % trees with same fringes\n -> tree(1, N, B)\n ; M is random(N\/\/10), % trees with different fringes\n flip(A, M, B) ),\n time(10, sf_1(A, B), T1),\n time(10, sf_2(A, B), T2),\n format('~0e ~2f ~2f\\n', [N, T1, T2]) ) ).\n\ntime(N, G, T) :-\n garbage_collect,\n S is cputime,\n forall(between(1, N, _), ignore(call(G))),\n T is (cputime - S) \/ N.\n\n\/*\n * RANDOM TREE GENERATION AND MODIFICATION\n *\/\n\ntree(X1, Xn, leaf(X1)) :-\n X1 = Xn,\n !.\ntree(X1, Xn, fork(L, R)) :-\n X1 < Xn,\n random(X1, Xn, Xi),\n Xj is Xi + 1,\n tree(X1, Xi, L),\n tree(Xj, Xn, R).\n\n\nflip(leaf(X), Y, leaf(Z)) :-\n ( X = Y\n -> Z is -X\n ; Z is X ).\nflip(fork(L0, R0), X, fork(L, R)) :-\n flip(L0, X, L),\n flip(R0, X, R).\n\n```\n\nThe empirical results show that the second solution is, in fact, **faster** than the first when the trees **do not have** the same fringes:\n\n\n\n```\n?- comp(false).\nfsize sf-1 sf-2\n1e+05 0.01 0.00\n2e+05 0.03 0.00\n3e+05 0.05 0.00\n4e+05 0.07 0.01\n5e+05 0.09 0.01\n6e+05 0.11 0.00\n7e+05 0.12 0.01\n8e+05 0.14 0.01\n9e+05 0.17 0.00\n1e+06 0.18 0.00\ntrue.\n\n```\n\nHowever, when the trees **do have** the same fringe, the second solution is a little **slower** than the first:\n\n\n\n```\n?- comp(true).\nfsize sf-1 sf-2\n1e+05 0.02 0.03\n2e+05 0.04 0.05\n3e+05 0.06 0.08\n4e+05 0.08 0.11\n5e+05 0.10 0.12\n6e+05 0.12 0.14\n7e+05 0.12 0.16\n8e+05 0.14 0.18\n9e+05 0.17 0.19\n1e+06 0.18 0.22\ntrue.\n\n```\n\n**QUESTION**: Is it possible to 
implement a solution (in *Prolog*) that is *faster* than the simple solution (by a constant factor, not necessarily *asymptotically faster*) when the fringes are distinct, yet is *not slower* when the fringes are the same? In other words, can we achieve the efficient comparison without the overhead? If so, how?","questionMetadata":{"type":"optimization","tag":"prolog","level":"advanced"},"answer":"Merge the two approaches into one. Always better than sf\\_2. Spacewise should be better than or equal to sf\\_1, because the first list is not generated. In SWI the `next\/3` goal needs to be unfolded to get always better or equal runtime.\n\n\n\n```\nsf_3(T1, T2) :-\n stepping(T1, [T2],[]).\n\nnext(X, [T|Ts0],Ts) :-\n t_next(T, X, Ts0,Ts).\n\nt_next(leaf(X), X, Ts,Ts).\nt_next(fork(L, R), X, Ts0,Ts) :-\n t_next(L, X, [R|Ts0],Ts).\n\nstepping(leaf(X), T2s0,T2s):-\n next(X, T2s0,T2s).\nstepping(fork(L, R), T2s0,T2s) :-\n stepping(L, T2s0,T2s1),\n stepping(R, T2s1,T2s)."}
{"questionId":"305524d3bf3de19340963f9f962a4b0ed1f9ebaba88d69d08d4ceb84b0a4a3fa","question":"AVX-512 BF16: load bf16 values directly instead of converting from fp32\nOn CPU's with AVX-512 and BF16 support, you can use the 512 bit vector registers to store 32 16 bit floats.\n\n\nI have found intrinsics to convert FP32 values to BF16 values (for example: \\_mm512\\_cvtne2ps\\_pbh), but I have not found any intrinsics to load BF16 values directly from memory. It seems a bit wasteful to always load the values in FP32 if I will then always convert them to BF16. Are direct BF16 loads not supported or have I just not found the right intrinsic yet?","questionMetadata":{"type":"debugging","tag":"assembly","level":"advanced"},"answer":"Strange oversight in the intrinsics. There isn't a special `vmov` instruction for BH16 in asm because you don't need one: you'd just use `vmovups` because asm doesn't care about types. (Except sometimes integer vs. FP domain, so probably prefer an FP load or store instruction - integer `vmovdqu16` might perhaps have an extra cycle of latency forwarding from load to FP ALU on some CPUs.)\n\n\nIf aligned load\/store works for your use-case, just point a `__m512bh*` at your data and deref it. (*[Is `reinterpret\\_cast`ing between hardware SIMD vector pointer and the corresponding type an undefined behavior?](https:\/\/stackoverflow.com\/questions\/52112605\/is-reinterpret-casting-between-hardware-simd-vector-pointer-and-the-correspond)* - it's well-defined as being equivalent to an aligned load or store intrinsic, and is allowed to alias any other data).\n\n\nIf not, then as @chtz points out, you can `memcpy` to\/from a `__m512bh` variable. Modern compilers know how to inline and optimize away small fixed-size memcpy, especially of the exact size of a variable. 
[@chtz's demo on Godbolt](https:\/\/godbolt.org\/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAMzwBtMA7AQwFtMQByARg9KtQYEAysib0QXACx8BBAKoBnTAAUAHpwAMvAFYTStJg1AB9U8lJL6yAngGVG6AMKpaAVxYMQAdlIOAMngMmABy7gBGmMQgAEzRpAAOqAqEtgzObh7eCUkpAgFBoSwRUbEWmFY2AkIETMQE6e6ePpaY1qnVtQT5IeGRMXEKNXUNmc1DXYE9RX2xAJQWqK7EyOwcAKTRAMyByG5YANRrm454LCyBBMSBAHQIR9hrGgCCG9sMu64HR46DV0a390eL2ephYAFYuNEwgh9rRUEx0K5jNDjGwWMh4gBPCAAN1QeHQACp9mjZkC1l4AEJA\/a0\/agiFQmHETAKI7U550kmYdFYiAbABsLIUpG5LFFyQAXphUFR9sKyZsOU8uSyCEsGPLWezyV4ACLk55AhmQ6Gw%2BHoZEIYyiQa4\/FEsVko1Uml0tUa\/aEiAmpmE2ZonUug1Go1bPBUBhYOXGACyQkcxgAatgAErG1GMs1whFIlG2gjGYWuWgEe0E4mk3XK1WYdXETU%2BzOmhCzelnRnGHOI4zxBQQKtK3Uhl5bBwRjjzWicMG8TwcLSkVCcRz7BSLZaYQ5bHikAiaSfzADWIE2XmuWzBkmiXC4AoFmw0Ap8044kjnB6XnF4ChAGj3B7zHAsAwFAoEQEgaAsPEdCROQlBQTB9BRMgwBcJscQ0KWkS\/hAYSfmEgS1JinC7oRzDEJiADyYTaK0%2B7cLwUFsIIVEMLQJELrwWBhK4wCOGItC\/oxpBYCwhjAOIXGiXgLJtDirKfpgqitK4BCrLuFzlJ%2BtB4GExDEc4WCfpcpykbwCnEGESSYHqPISbpRiAXwBjAAoSZ4JgADuVHxIw5kyIIIhiOwUiBfIShqJ%2BuhcPoEkgKYNr6Hpv6QPMqDxJUDDCQAtFRXD7Dl4nLHcmx6goR6YgYR6YDlTA4qojK8KgllXFgqUQPMLRtHYEAOCMnixf4kyFMUeiJMkWUDeNORZd0o19LF3VZR0wwuI0ejLe04zzb0URLeM00HZ0u3TPtXUbisEhTjOH7ScuHD7KoAAcAo5QKkj7MAyDIPs6EXvsEC4IQJDbpsXCzLwDFaLMx4gGC\/6vu%2BpAsPD\/7zouD0\/n%2BAFcUB4GQag0GwWQFAQIhJMgDiyDGDiXAAJzGJsz3GKoH18HQ6nELh%2BHSeRxEBfzlE0XR1gBcxjAEGxHGfjxfECbQQkBWJDmrIu%2BByTYCnCYuymqepAVaa%2Bi66fphkYGrUNXKjImWdZSh2eJRiOaAeMuUwbked5vn%2BSJ\/BBaI4hhf7EUqOo0m6JscVOYl5imx16WZakuVUdEhW1MgSBPEmAAajKFQA4jizWtQSilpWUFSpPYUZHb4UanWNsUTbkaTrZkzezakjeLZX9HbZ0ddbVUO0jXtm2He3g0WKPBTjxDCxLFdC%2BvrOpAY81nBPR9JIKNT%2Bx0\/T1zM4DwNEMQYMQ1DgHzAgmAIn0nWkCeCP6JwyMb1%2BHDY\/%2B0OHm\/HBoi8FRq\/T%2BWNcYw1IMBRAKAiZITgmTCmyEUC7AklwZ6XB\/xYS5jzAiRFKKC3wdRWi9FxZExYlLdinF1aYF4vxQSwldwq2dpbGSms8DayUipZAakNK8CNjpPSBlKJGVYaZG2u47Y2Udg5QIrtIFUFcu5TyPk\/Lzl3CHYKQdpAh0UGHaKMRo4mDMMlMICclxJwECnAqRUM6lXKpVaqtV6qNUhCXSIbVy5P2Hp4Pqtcp56GGnPM6M1JqpDri3OaY8QlLXKP3Eeg8AmxKrgkuoPd9oz0SRkaegwTrRKbhdJeoUbocDXmAreL03o71QUYP6z1rhcGuBoU%2B%2BBz6X0hhA\/%2Bd8H5RCfkjYBaN16fnAb%2BX%2BN9n4gAFNEa4XgPpgi8M9Z6mxNgCnpss2Kr5Nh3Uxt%2BTpsMAFAKGfdXZf99mWWSHYSQQA%3D) shows it optimizes the way we want with GCC and clang `-O1`, like with deref of a `__m512bh*` but working for unaligned.\n\n\nBut not so good with MSVC; it works correctly, but the memcpy to a local var actually reserves stack space and stores the value to it, as well as leaving it in ZMM0 as the return value. (Not reloading the copy, but not optimizing away the storage and the dead store to `res`.)\n\n\n\n\n---\n\n\nWith intrinsics, there isn't even a cast intrinsic from `__m512`, `__m512d`, or `__m512i`. (Or for any narrower vector width.)\n\n\nBut most compilers do also let you use a C-style cast on the vector type, like this to reinterpret (type-pun) the bits as a different vector type:\n\n\n\n```\n __m512bh vec = (__m512bh) _mm512_loadu_ps( ptr ); \/\/ Not supported by MSVC\n\n```\n\nThis is *not* a standard thing defined by [Intel's intrinsics guide](https:\/\/www.intel.com\/content\/www\/us\/en\/docs\/intrinsics-guide\/index.html#techs=MMX,SSE_ALL,AVX_ALL,AVX_512,Other&ig_expand=796,3668,307,5038,672&text=_mm512_castps_), but GCC and clang at least implement C-style casts (and C++ `std::bit_cast` and probably `static_cast`) the same way as the intrinsics API's functions like `_mm512_castsi512_ps` or `_mm512_castps_ph` (the FP16 intrinsic that we wish existed for BF16).\n\n\nThe AVX-512 load intrinsics take `void*`, making it clear that it's fine to use them on any type of data. 
So this just works with no casting of the pointer, just the vector data.\n\n\nThe 256-bit and 128-bit integer loads \/ stores take the respective `__m256i*` or `__m128i*` pointers, the FP loads take `float*`. But it's still strict-aliasing safe to do `_mm_loadu_ps( (float*)&int_vector[i] )`. Anyway, once you get a `__m256` or `__m128`, `(__m256bh) vec` will work in most compilers.\n\n\nMSVC chokes on this cast. You might get away with a C++20 `std::bit_cast<__m512h>( vec )` for MSVC if you're using C++. **But if you want to write portable C that compiles efficiently on MSVC as well as GCC\/Clang, your only option might be to deref an aligned pointer.** `memcpy` compiles to a dead store on MSVC, casting the value doesn't work, and deref of a vector pointer requires alignment on GCC\/Clang. MSVC always avoids alignment-checking versions of instructions, so if you're willing to `#ifdef`, it might be safe to deref an unaligned `__m512h*` on MSVC.\n\n\n(It's not safe to deref a `__m128*` without AVX because it could fold into a memory source operand like `addps xmm0, [rdi]` which does require alignment, but that's only for legacy-SSE things. VEX \/ EVEX encodings allow unaligned by default. A raw deref won't invent `vmovntps` stores that only come in alignment-required flavour; if a `vmovxxx` is required it'll use `vmovups` instead of `vmovaps` even if the pointer is known to be aligned. GCC and clang *will* use alignment-enforcing instructions when they can prove it's safe, unlike MSVC and classic ICC.)"}
{"questionId":"7e5fb1245c0fe088bd88ad212edf3060bd6298ea56aa9b79127413e0842e735e","question":"Pydantic non-default argument follows default argument\nI don't understand why the code:\n\n\n\n```\nfrom typing import Optional\nfrom pydantic import Field\nfrom pydantic.dataclasses import dataclass\n\n@dataclass\nclass Klass:\n field1: str = Field(min_length=1)\n field2: str = Field(min_length=1)\n field3: Optional[str]\n\n```\n\nthrows the error:\n\n\n`TypeError: non-default argument 'field3' follows default argument`\n\n\nif by default `Field` `default` kwarg is `PydanticUndefined`. Why are `field1` and `field2` default arguments?\n\n\nI'm using python 3.8 and pydantic 2.6\n\n\nI tried `field3: Optional[str] = Field(...)` and it works. I expected the code block above to work because all fields are required and none has default values.","questionMetadata":{"type":"version","tag":"python","level":"beginner"},"answer":"TL;DR\n\n\n- `field3` is a required argument with type `Optional[str]`, not an optional argument, because you didn't assign anything to `field3` in the class definition.\n- `field1` and `field2` are *technically* optional, because the `Field` object you assign to each provides a default of `PydanticUndefined`. That value, though,\ncauses a validation error at runtime if you don't supply another argument in its place.\n\n\n\n\n---\n\n\nThe `dataclass` decorator is constructing a `def` statement to define your class's `__init__` method that looks something like\n\n\n\n```\ndef __init__(self, field1=PydanticUndefined, field2=PydanticUndefined, field3):\n ...\n\n```\n\nThe constructed statement is then `exec`ed, which is why you get the error about a non-default argument when the class is defined, rather than when you try to instantiate the class.\n\n\nTo make `field3` optional, you have to provide a default value.\n\n\n\n```\nfield3: Optional[str] = None\n\n```\n\nThis makes the defined statement something like\n\n\n\n```\ndef __init__(self, field1=PydanticUndefined, field2=PydanticUndefined, field3=None):\n ...\n\n```\n\n\n\n---\n\n\nYou can't (as far as I know) make `field1` or `field2` truly required; the `PydanticUndefined` value just causes `__init__` to raise a `ValidationError` rather than a `TypeError` if no explicit\nargument is passed.\n\n\n\n```\n>>> Klass()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"\/Users\/chepner\/py311\/lib\/python3.11\/site-packages\/pydantic\/_internal\/_dataclasses.py\", line 134, in __init__\n s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)\npydantic_core._pydantic_core.ValidationError: 2 validation errors for Klass\nfield1\n Field required [type=missing, input_value=ArgsKwargs(()), input_type=ArgsKwargs]\n For further information visit https:\/\/errors.pydantic.dev\/2.5\/v\/missing\nfield2\n Field required [type=missing, input_value=ArgsKwargs(()), input_type=ArgsKwargs]\n For further information visit https:\/\/errors.pydantic.dev\/2.5\/v\/missing\n\n```\n\nI haven't dug into the source to see *exactly* how that happens, but I assume it's something resembling\n\n\n\n```\ndef __init__(self, field1=PydanticUndefined, ...):\n if field1 is PydanticUndefined:\n # prepare ValidationError exception\n if field2 is PydanticUndefined:\n # prepare ValidationError exception\n if <ValidationError needs to be raised>:\n raise ValidationError(...)\n\n```\n\nIf desired, you can provide \"real\" default values for `field1` and `field2` by adding the `default` keyword argument to 
`Field`."}
{"questionId":"1a90b700adefa888c73d7f5dfe5b7f3b2a44f0031d4681870a1e79874f1a8e96","question":"What is normal exit for a task group in Swift Concurrency\nIn WWDC session Explore structured concurrency in Swift. There is a part about the normal exit of task group.\n\n\n\n> \n> While task groups are a form of structured concurrency, there is a small difference in how the task tree rule is implemented\u00a0for group tasks versus async-let tasks. Suppose when iterating through the results of this group, I encounter a child task that completed with an error.\u00a0Because that error is thrown out of the group\u2019s block, all tasks in the group will then be implicitly canceled and then awaited. This works just like async-let.\u00a0\n> \n> \n> \n\n\n\n> \n> The difference comes when your group goes out of scope through a normal exit from the block. Then, cancellation is not implicit. This behavior makes it easier for you to express the fork-join pattern using a task group, because the jobs will only be awaited not canceled. You can also manually cancel all tasks before exiting the block using the group\u2019s cancelAll method. Keep in mind that no matter how you cancel a task, cancellation automatically propagates down the tree.\n> \n> \n> \n\n\nLet's use an example.\n\n\n\n```\nfunc fetchThumbnails(for ids: [String]) async throws -> [String: UIImage] {\n var thumbnails: [String: UIImage] = [:]\n try await withThrowingTaskGroup(of: (String, UIImage).self) { group in\n for id in ids {\n group.async {\n return (id, try await fetchOneThumbnail(withID: id))\n }\n }\n \/\/ Obtain results from the child tasks, sequentially, \n \/\/ in order of completion.\n for try await (id, thumbnail) in group {\n thumbnails[id] = thumbnail\n }\n }\n return thumbnails\n}\n\n```\n\nWhat is the normal exit in this example? When it reaches `return` the task group finished all the tasks, there is nothing to cancel. Please help me understand this.","questionMetadata":{"type":"conceptual","tag":"swift","level":"intermediate"},"answer":"They are just saying that with `async let`, if you neglect to `await` that task before it falls out of scope, it will be \u201cimplicitly canceled\u201d. But with task group, if you don\u2019t explicitly `await` the individual group tasks, those will *not* be implicitly canceled.\n\n\nWhen the video references \u201cnormal exit\u201d, they are just talking about the \u201chappy path\u201d where no errors occurred, where nothing was explicitly canceled, and execution just completed as normal, without error. They talk about the explicit cancelation and error handling elsewhere in that video; they just wanted to bring our attention to the subtle difference between how `async let` behaves if it falls out of scope without being awaited, and the corresponding task group behavior.\n\n\n\n\n---\n\n\nConsider this example in [SE-0313](https:\/\/github.com\/apple\/swift-evolution\/blob\/main\/proposals\/0317-async-let.md#implicit-async-let-awaiting):\n\n\n\n```\nfunc go() async { \n async let f = fast() \/\/ 300ms\n async let s = slow() \/\/ 3seconds\n print(\"nevermind...\")\n \/\/ implicitly: cancels f\n \/\/ implicitly: cancels s\n \/\/ implicitly: await f\n \/\/ implicitly: await s\n}\n\n```\n\nThat is admittedly a contrived example, not something that you would likely ever do in practice. 
A more realistic example might be some code in which, after creating the tasks with `async let`, we might have some logic that employs an early exit, resulting it never reaching the `await` of one or more of the `async let` tasks. In that scenario, with `async let`, any tasks not explicitly awaited will be implicitly canceled.\n\n\nHaving outlined what \u201cimplicit cancel\u201d means, let us now consider your example:\n\n\n\n```\nfunc fetchThumbnails(for ids: [String]) async throws -> [String: UIImage] {\n try await withThrowingTaskGroup(of: (String, UIImage).self) { group in\n for id in ids {\n group.addTask { [self] in\n try await (id, fetchOneThumbnail(withID: id))\n }\n }\n \n \/\/ Obtain results from the child tasks, sequentially,\n \/\/ in order of completion.\n\n var thumbnails: [String: UIImage] = [:]\n\n for try await (id, thumbnail) in group {\n thumbnails[id] = thumbnail\n }\n\n return thumbnails\n }\n}\n\n```\n\n(I replaced `group.async` with `group.addTask` and made a few cosmetic changes, but this is effectively the same as yours.)\n\n\nThis is really not applicable to the \u201cimplicit cancel\u201d discussion because this has a `for` loop that has an `await` for each task in the group (as you accumulate the results in a dictionary). So the whole idea of \u201cimplicit cancel\u201d does not apply because all of the child tasks are explicitly awaited.\n\n\nInstead, let us consider a variation on the theme, perhaps one where `fetchOneThumbnail` did not actually return anything, but just updated some internal cache. Then it would might be:\n\n\n\n```\nfunc fetchThumbnails(for ids: [String]) async {\n await withTaskGroup(of: Void.self) { group in\n for id in ids {\n group.addTask { [self] in\n await fetchOneThumbnail(withID: id)\n }\n }\n\n \/\/ NB: no explicit `for await` loop of `group` sequence is needed; unlike \n \/\/ `async let` pattern, these tasks will *not* be implicitly canceled\n }\n}\n\n```\n\nBut in this example, even though we never `for await` the `group` sequence at all (thus, none of the child tasks are explicitly awaited, in contrast to the prior example), this will *not* implicitly cancel the tasks in the group, unlike `async let`. It will just automatically `await` all those tasks for us.\n\n\nIn short, `async let` will implicitly cancel anything not explicitly awaited, while task group will *not* implicitly cancel anything (in the \u201cnormal exit\u201d scenario, at least), but rather will implicitly `await` the child tasks."}
{"questionId":"2a3217e4c36504ee3ad6266cd460756fec3d1dd3f7163a25d9cc8a207b9ffd5c","question":"Iterate over list of models and compare model fit using AIC, BIC\nI have dataset with multiple outcome variables that I would like to test against one predictor. on exploratory analysis I noted some of the relationships are polynomial to the degree 2 rather than linear. I would like to look at the BIC and AIC to make my decision of which is the best model to run.\n\n\nI have `lapply` function where I can iterate over multiple outcome variables but now I would like to add a second model and compare their fit. However when I run this function, it only saves the second model and I dont know how to get to 'outputs' to run through the next function. Can I have this within one function or do I need two?\n\n\nHere is a example from the iris dataset\n\n\n\n```\ndata(iris)\nvars <- names(iris[2:4])\n\nmodels2 <- lapply(vars, function(x) {\n model_list=list(\n mod1=lm(substitute(i ~ Sepal.Length, list(i=as.name(x))), data=iris),\n mod2=lm(substitute(i ~ poly(Sepal.Length,2), list(i=as.name(x))), data=iris))\n})\n\ny <- lapply(models2, summary) #This only saves results from mod2\n\n```\n\nHow do I then compare `mod1` to `mod2` fit and extract the following variable's?\n\n\n\n```\ndata.frame(\n do.call(merge, list(BIC(mod1, mod2), AIC(mod1, mod2))), \n logLik=sapply(list(mod1, mod2), logLik), \n anova(mod1, mod2, test='Chisq'))","questionMetadata":{"type":"implementation","tag":"r","level":"intermediate"},"answer":"First, make the `lm` nicer using `do.call` and `reformulate`. Then `lapply` over the models like this:\n\n\n\n```\n> models2 <- lapply(setNames(vars, vars), function(x) {\n+ list(\n+ mod1=do.call('lm', list(reformulate('Sepal.Length', x), quote(iris))),\n+ mod2=do.call('lm', list(reformulate('poly(Sepal.Length, 2)', x), quote(iris)))\n+ )\n+ })\n> \n> (res <- lapply(models2, \\(x) data.frame(\n+ with(x, do.call('merge', list(BIC(mod1, mod2), AIC(mod1, mod2)))),\n+ logLik=with(x, sapply(list(mod1, mod2), logLik)),\n+ with(x, anova(mod1, mod2))\n+ )))\n$Sepal.Width\n df BIC AIC logLik Res.Df RSS Df Sum.of.Sq F Pr..F.\n1 3 188.4963 179.4644 -86.73221 148 27.91566 NA NA NA NA\n2 4 189.8107 177.7682 -84.88410 147 27.23618 1 0.6794752 3.667285 0.05743267\n\n$Petal.Length\n df BIC AIC logLik Res.Df RSS Df Sum.of.Sq F Pr..F.\n1 3 396.1669 387.1350 -190.5675 148 111.4592 NA NA NA NA\n2 4 389.8649 377.8223 -184.9112 147 103.3623 1 8.096848 11.51519 0.000887343\n\n$Petal.Width\n df BIC AIC logLik Res.Df RSS Df Sum.of.Sq F Pr..F.\n1 3 192.4030 183.3711 -88.68553 148 28.65225 NA NA NA NA\n2 4 182.3757 170.3331 -81.16656 147 25.91907 1 2.733179 15.50122 0.000126881\n\n```\n\nYou could additionally `rbind`.\n\n\n\n```\n> do.call('rbind', res)\n df BIC AIC logLik Res.Df RSS Df Sum.of.Sq F Pr..F.\nSepal.Width.1 3 188.4963 179.4644 -86.73221 148 27.91566 NA NA NA NA\nSepal.Width.2 4 189.8107 177.7682 -84.88410 147 27.23618 1 0.6794752 3.667285 0.057432671\nPetal.Length.1 3 396.1669 387.1350 -190.56750 148 111.45916 NA NA NA NA\nPetal.Length.2 4 389.8649 377.8223 -184.91116 147 103.36231 1 8.0968484 11.515191 0.000887343\nPetal.Width.1 3 192.4030 183.3711 -88.68553 148 28.65225 NA NA NA NA\nPetal.Width.2 4 182.3757 170.3331 -81.16656 147 25.91907 1 2.7331790 15.501223 0.000126881\n\n```\n\n\n\n---\n\n\n*Data:*\n\n\n\n```\n> data(iris)\n> vars <- names(iris[2:4])"}
{"questionId":"41131e772770bf92a875c1c6b9fb90f30bf6592537cbc51fce42a2822f57bbf1","question":"How to specify column data type\nI have the following code:\n\n\n\n```\nimport polars as pl\nfrom typing import NamedTuple\n\n\nclass Event(NamedTuple):\n name: str\n description: str\n\n\ndef event_table(num) -> list[Event]:\n events = []\n for i in range(num):\n events.append(Event(\"name\", \"description\"))\n return events\n\n\ndata = {\"events\": [1, 2]}\ndf = pl.DataFrame(data).select(events=pl.col(\"events\").map_elements(event_table))\n\n\"\"\"\nshape: (2, 1)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 events \u2502\n\u2502 --- \u2502\n\u2502 list[struct[2]] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 [{\"name\",\"description\"}] \u2502\n\u2502 [{\"name\",\"description\"}, {\"name\"\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\"\"\"\n\n```\n\nBut if the first list is empty, I get a `list[list[str]]` instead of the `list[struct[2]]` that I need:\n\n\n\n```\ndata = {\"events\": [0, 1, 2]}\ndf = pl.DataFrame(data).select(events=pl.col(\"events\").map_elements(event_table))\nprint(df)\n\n\"\"\"\nshape: (3, 1)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 events \u2502\n\u2502 --- \u2502\n\u2502 list[list[str]] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 [] \u2502\n\u2502 [[\"name\", \"description\"]] \u2502\n\u2502 [[\"name\", \"description\"], [\"name\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\"\"\"\n\n```\n\nI tried using the `return_dtype` of the `map_elements` function like:\n\n\n\n```\ndata = {\"events\": [0, 1, 2]}\ndf = pl.DataFrame(data).select(\n events=pl.col(\"events\").map_elements(\n event_table,\n return_dtype=pl.List(pl.Struct({\"name\": pl.String, \"description\": pl.String})),\n )\n)\n\n```\n\nbut this failed with:\n\n\n\n```\nTraceback (most recent call last):\n File \"script.py\", line 18, in <module>\n df = pl.DataFrame(data).select(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \".venv\/lib\/python3.11\/site-packages\/polars\/dataframe\/frame.py\", line 8193, in select\n return self.lazy().select(*exprs, **named_exprs).collect(_eager=True)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \".venv\/lib\/python3.11\/site-packages\/polars\/lazyframe\/frame.py\", line 1943, in collect\n return wrap_df(ldf.collect())\n ^^^^^^^^^^^^^\npolars.exceptions.SchemaError: expected output type 'List(Struct([Field { name: \"name\", dtype: String 
}, Field { name: \"description\", dtype: String }]))', got 'List(List(String))'; set `return_dtype` to the proper datatype\n\n```\n\nHow can I get this to work? i need the type of this column to be `list[struct[2]]` event if the first list is empty.","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"## Quick fix right now\n\n\nHere's a `map_batches` implementation that should be at least marginally faster.\n\n\n\n```\ndef event_table(col: pl.Series) -> pl.Series:\n return pl.Series(\n [\n [\n Event(\"name\", \"description\")._asdict() #note ._asdict()\n for _ in range(num)\n ]\n for num in col\n ]\n )\n\n```\n\nIt uses nested list comprehensions which ought to be a bit faster than appending to a list in an explicit for loop but that is a python optimization not polars.\n\n\n\n```\npl.DataFrame(data).select(events=pl.col(\"events\").map_batches(event_table))\nshape: (3, 1)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 events \u2502\n\u2502 --- \u2502\n\u2502 list[struct[2]] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 [] \u2502\n\u2502 [{\"name\",\"description\"}] \u2502\n\u2502 [{\"name\",\"description\"}, {\"name\"\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\nYou actually just need to use `_asdict()` rather than relying on polars to infer what a NamedTuple ought to be.\n\n\n## Medium to Long term fix\n\n\nThe issue is [here](https:\/\/github.com\/pola-rs\/polars\/blob\/c8640cb928eb14d164bc0a158398b995641d4351\/py-polars\/src\/conversion\/any_value.rs#L363-L364) specifically that in certain paths, it treats tuples and lists the same and since a NamedTuple is a tuple, that's why it gets returned as a list.\n\n\n[This PR](https:\/\/github.com\/pola-rs\/polars\/pull\/15433) makes it check for the `_asdict` method and shifts to treating it as a dict\/struct.\n\n\nWith this PR you can do\n\n\n\n```\nclass Event(NamedTuple):\n name: str\n description: str\n\ndef event_table(num: int) -> list[Event]:\n return [Event(\"name\", \"desc\") for _ in range(num)]\n\ndata = {\"events\": [0, 1, 2]}\npl.DataFrame(data).select(\n events=pl.col(\"events\").map_elements(\n event_table,\n return_dtype=pl.List(\n pl.Struct({\"name\": pl.String, \"description\": pl.String})\n ),\n )\n)\nshape: (3, 1)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 events \u2502\n\u2502 --- \u2502\n\u2502 list[struct[2]] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 [] \u2502\n\u2502 [{\"name\",\"desc\"}] \u2502\n\u2502 [{\"name\",\"desc\"}, {\"name\",\"desc\"\u2026 
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518"}
{"questionId":"b7cfe138146671f0027d82c4fa76e1442e09b136a59e9dea9f0f7c655d355782","question":"How to set the custom Microsoft visual studio projects MSI to open it without admin power\nI want to know if there some options to set the MSI file without admin power. When I open the msi file in non-admin user, it always need admin power. How can I set it to open the msi file without the admin power?\n\n\nBased on my search, I find the remote desktop has the behavior, but I could not find the settings in Microsoft Visual Studio Installer Projects.\n\n\nI want to use MSI installer for non-admin user.","questionMetadata":{"type":"implementation","tag":"other","level":"intermediate"},"answer":"According to my latest test, you only need to set the AlwaysInstallElevated value under the following two registry keys to \"1\" as the current user, and you can perform MSI installation without administrator privileges.\n\n\nHKEY\\_CURRENT\\_USER\\Software\\Policies\\Microsoft\\Windows\\Installer\n\n\nHKEY\\_LOCAL\\_MACHINE\\Software\\Policies\\Microsoft\\Windows\\Installer\n\n\nYou can read this document:\n<https:\/\/learn.microsoft.com\/en-us\/windows\/win32\/msi\/alwaysinstallelevated>"}
{"questionId":"49b24a99af10f65e1cf6341fd2344e98a531eb23eb9a25a5aee53e3c9a1b832b","question":"Python object creation behavior\nComing from Java, I have trouble understanding how objects are allocated in Python.\nConsider this Python script:\n\n\n\n```\nx = (\"a\", \"b\")\ny = (\"a\", \"b\")\nprint(x is y) # True\n\nx = [\"a\", \"b\"]\ny = [\"a\", \"b\"]\nprint(x is y) # False\n\n```\n\nIn languages similar to Java, new keyword assures that a new instance with another memory location is created.\nBut I'm assuming that's not the case here. So how can this behavior be explained? Is there any kind of pool for immutable types in Python to prevent duplicate literals?\n\n\nI guess that Python keeps track of immutable types since the same case returns True for strings, complex numbers and other immutable objects. But if so, why bother?","questionMetadata":{"type":"conceptual","tag":"python","level":"intermediate"},"answer":"When determining whether two objects are equal, there are two methods:\n\n\n1. The `is` keyword determines whether two objects have the same memory id. In other words, `a is b` means that both `a` and `b` are stored in the same memory location with the same `id`. Basically `a is b` is equal to the expression `id(a) == id(b)`.\n2. The `==` expression part is a *comparison operator*. It checks if two objects have the same value, but not necessarily stored in the same memory location.\n\n\nWhen creating objects, Python gets to decide if the new object is to *share* the memory location (id) of an existing object. For example,\n\n\n\n```\na = 1\nb = 1\nprint(a == b) # always True\nprint(a is b) # most likely True\n\n```\n\nThe results are stated in the comments. `a is b` in this case are probably `True`, while `a == b` is always `True`. But in this code,\n\n\n\n```\na = [{'a':1, 'b':2, 'c':3}, ([1, 2, 3], 7, 8, 9), 'foo', ('bar', 'eggs')]\nb = [{'a':1, 'b':2, 'c':3}, ([1, 2, 3], 7, 8, 9), 'foo', ('bar', 'eggs')]\nprint(a == b) # always True\nprint(a is b) # most likely False\n\n```\n\n`a is b` is probably `False` while `a == b` is still always `True`. This difference is because of how Python determines what storage is most efficient. In the first code, the two variables `a` and `b` are both assigned to a number `1`. Because the number `1` is used too frequently, and it is an immutable object (remember `int`s are immutable objects), Python most likely will decide to store the object in the same memory location as the other `1`s in this or other programs. However,\n\n\n\n```\n[{'a':1, 'b':2, 'c':3}, ([1, 2, 3], 7, 8, 9), 'foo', ('bar', 'eggs')]\n\n```\n\nis *not* too common, and the object is mutable (`list`s are mutable). Thus, in this case, Python will store the object in another memory location, so `is` might not return `True`, although both objects have the same value. An extremely important thing to note here is that ***all** mutable objects that are created independently without any reference to other objects will **always** have a different memory location, regardless of the actual value*. Note that `tuple`s are a special case: they are both collections but also immutable, like strings. 
The following three codes will demonstrate the concept clearly:\n\n\n\n```\n# Program 1: Common and uncommon Immutable types\n# Part I: Common\n# define variables\na = 1\nb = 1\nc = 1\n# check values\nprint(a == b) # True\nprint(a == c) # True\nprint(b == c) # True\n# preform operation\nprint(a+b+c == a+b+c) # True\n# use `is` to check values\nprint(a is b) # True\nprint(a is c) # True\nprint(b is c) # True\n# You might expect `False` here, but keep in mind that `3` is another common integer which is immutable\nprint(a+b+c is a+b+c) # True\n\n# Part II: Uncommon\n# define variables\na2 = 123457997542\nb2 = 123457997542\nc2 = 123457997542\n# check values\nprint(a2 == b2) # True\nprint(a2 == c2) # True\nprint(b2 == c2) # True\n# preform operation\nprint(a2+b2+c2 == a2+b2+c2) # True\n# use `is` to check values\nprint(a2 is b2) # True\nprint(a2 is c2) # True\nprint(b2 is c2) # True\nprint(a2+b2+c2 is a2+b2+c2) # False, because 740747985252 is not a common enough number that exists all the time\n\n# Part III: tuples\n\nprint('entering tuples part')\nt1 = (1, 2)\nt2 = (1, 2)\nt3 = (1, 2)\n\n# check values\nprint(t1 == t2) # True\nprint(t1 == t3) # True\nprint(t2 == t3) # True\n# preform operation\nprint(t1 + t2 + t3 == t1 + t2 + t3) # True\n\n# Use `is`\nprint(t1 is t2) # True\nprint(t1 is t3) # True\nprint(t2 is t3) # True\n# preform operation\nprint(t1 + t2 + t3 is t1 + t2 + t3) # False, as expected\n\n\n```\n\n\n```\n# Program 2: Common and uncommon mutable objects\n# Part I: common: lists\n# define variables\na = [1, 2]\nb = [1, 2]\nc = [1, 2]\n\n# check values\nprint(a == b) # True\nprint(a == c) # True\nprint(b == c) # True\n# preform operation\nprint(a + b + c == a + b + c) # True\n\n# Use `is`\nprint(a is b) # False\nprint(a is c) # False\nprint(b is c) # False\n# preform operation\nprint(a + b + c is a + b + c) # False, as expected\n\n# Part II: Uncommon\na2 = [{'a': 1, 'b': 2, 'c': 3}, ('1', '2', '3', '4'), [1, 2, 3, 4], ({'a': 1, 'b': 2}, (1, 2)), 'foo',\n 'bar'] # extremely weird and uncommon object; as expected, all `is` operation will be `False`\nb2 = [{'a': 1, 'b': 2, 'c': 3}, ('1', '2', '3', '4'), [1, 2, 3, 4], ({'a': 1, 'b': 2}, (1, 2)), 'foo', 'bar']\nc2 = [{'a': 1, 'b': 2, 'c': 3}, ('1', '2', '3', '4'), [1, 2, 3, 4], ({'a': 1, 'b': 2}, (1, 2)), 'foo', 'bar']\n# check values\nprint(a2 == b2) # True\nprint(a2 == c2) # True\nprint(b2 == c2) # True\n# preform operation\nprint(a2 + b2 + c2 == a2 + b2 + c2) # True\n\n# Use `is`\nprint(a2 is b2) # False\nprint(a2 is c2) # False\nprint(b2 is c2) # False\n# preform operation\nprint(a2 + b2 + c2 is a2 + b2 + c2) # False, as expected\n\n\n```\n\n\n```\n# Program 3: Reference pointer; sometimes obscure and hard to understand\n# Part I: Common immutable\n# Part 1: common integers\n# define variables\na = 1\nb = a\nc = b\n\n# use `is` to check values\nprint(a is b) # True\nprint(a is c) # True\nprint(b is c) # True\nprint(a+b+c is a+b+c) # True\n\n# Part 2: Uncommon integers\n# define variables\na2 = 123457997542\nb2 = a2\nc2 = b2\n\n# use `is` to check values\nprint(a2 is b2) # True\nprint(a2 is c2) # True\nprint(b2 is c2) # True\nprint(a2+b2+c2 is a2+b2+c2) # False, because 740747985252 is not a common enough number that exists all the time\n\ndel a, b, c, a2, b2, c2 # delete variables\n# ===============================================================================================================\n# Part II: Mutable and tuples\n# Part 1: common: lists\n# define variables\na = [1,2]\nb = a\nc = b\n\n# Use `is`\nprint(a is b) # 
True\nprint(a is c) # True\nprint(b is c) # True\n# preform operation\nprint(a+b+c is a+b+c) # False, as expected\n\ndel a, b, c\n# Part 2: common: tuples\n# define variables\nprint('entering tuples part')\na = (1, 2)\nb = a\nc = b\n\n# Use `is`\nprint(a is b) # True\nprint(a is c) # True\nprint(b is c) # True\n# preform operation\nprint(a+b+c is a+b+c) # False, as expected\n\n# Note: the results of tuples are the same as with lists.\n\nprint('uncommon')\n# Part III: Uncommon\na2 = [{'a':1, 'b':2, 'c':3}, ('1', '2', '3', '4'), [1, 2, 3, 4], ({'a':1, 'b':2}, (1, 2)), 'foo', 'bar'] # extremely weird and uncommon object; as expected, all `is` operation will be `False`\nb2 = a2\nc2 = b2\n\n# Use `is`\nprint(a2 is b2) # True\nprint(a2 is c2) # True\nprint(b2 is c2) # True\n# preform operation\nprint(a2+b2+c2 is a2+b2+c2) # False, as expected\n\n# Notice all `is` results are `True` here no matter common or uncommon except for the last operation one\n# ================================================================================================================\n# Part III: List elements\n# Part I: literal lists does not affect elements\nl = [1, 2, 3]\nl2 = l\nl3 = [1, 2, 3]\nprint(l is l2) # True, same memory location\nprint(l[0] is l2[0]) # True, how the elements of a list or other collection are stored are not affected, same as in an independant variable\nprint(l[0] is l3[0]) # Still True\ndel l, l2\n\n# Part II: elements cannot be changed by changing a copy of a element\nl = [1, 2, 3]\nl2 = l[0]\nprint(l, l2)\nl2 = 1\nprint(l, l2)\n\ndel l, l2\n# Part IV: Slice also cannot change the list\nl = [1, 2, 3]\nl2 = l[1:]\nprint(l, l2)\nl2 = 4\nprint(l, l2)\n\ndel l, l2\n\n# Part V: But, mutable objects within a list *can* be change by changing a copy of the object *IF* using append or other methods\nl = [[1, 2, 3], [4, 5, 6]]\nl2 = l[0]\nl3 = l[0]\nprint(l, l2)\nl2 = [7, 8, 9]\nprint(l, l2)\n\nl3.append(10)\nprint(l, l3) # l changes\n\n\n```\n\nAlthough the code is a bit long, I hope you can understand Python's memory model better.\n\n\n\n\n---\n\n\nNote for code comments: the \"Part\" followed by roman numerals are the big categories, and the \"Part\" followed by Arabic numerals are the sub categories under the nearest big category.\n\n\n\n\n---\n\n\nNote that in the second program the tuples have a different behavior than lists. The `is` check return `True`, because in Python 3 tuples with the same value are usually stored in the same memory address location. In the third program, the result are the same as lists.\n\n\n# Quick Tip:\n\n\nTo get the reference count of a certain object, use `sys.getrefcount()`:\n\n\n\n```\n>>> sys.getrefcount(1)\n1796\n>>> sys.getrefcount(9)\n139\n>>> sys.getrefcount([1,2])\n1\n>>> sys.getrefcount(2)\n1131\n>>> sys.getrefcount(200)\n26\n>>> sys.getrefcount(2002)\n2\n>>> sys.getrefcount([])\n1\n>>> sys.getrefcount({})\n1\n>>> sys.getrefcount(())\n25760\n>>> sys.getrefcount(True)\n2356\n>>> sys.getrefcount(False)\n2943\n>>> sys.getrefcount(list)\n86\n\n```\n\nNote that even empty lists (`[]`) and dictionaries (`{}`) does not have any reference to it. However, empty tuples (`()`) have a large amount of references. The objects `True` and `False` both have a large amount of references."}
{"questionId":"81686bf3f07363a0606608a8431f9eeb94a28b22c74e0150cc0a64e4bcd4643e","question":"How to make compiler warning about the enum type mismatch in switch and case?\nFor the following code, I hope there is a warning because `a` is type `enum A`, but the case `B1` and `B2` are type `enum B`; but I couldn't find a way to make gcc\/clang warn about it.\n\n\nAny suggestion on how to detect the potential bugs like this?\n\n\nThanks\n\n\n\n```\nenum A { A1, A2 };\n\nenum B { B1, B2 };\n\nint foo(enum A a) {\n switch(a) {\n case B1:\n return 1;\n case B2:\n return 2;\n default:\n return 3;\n }\n}\n\n\n```\n\n\n```\n$ clang -Wall -Wextra -Wpedantic -Werror -c enum3.c; echo $?\n0\n$ gcc -Wall -Wextra -Wpedantic -Werror -c enum3.c; echo $?\n0\n$ clang --version\nUbuntu clang version 14.0.0-1ubuntu1.1\nTarget: x86_64-pc-linux-gnu\nThread model: posix\nInstalledDir: \/usr\/bin\n$ gcc --version\ngcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.","questionMetadata":{"type":"debugging","tag":"c","level":"intermediate"},"answer":"As others have noted, there does not appear to be any\nwarning built in to GCC or Clang to do this when compiling as C, although Kevin points out in a comment that [`-Wenum-compare-switch`](https:\/\/clang.llvm.org\/docs\/DiagnosticsReference.html#wenum-compare-switch), which is enabled by default, will warn about this in C++ mode. There is also no\n[`clang-tidy`](https:\/\/clang.llvm.org\/extra\/clang-tidy\/checks\/list.html)\ncheck for it.\n\n\nHowever, it is possible to write a check condition for this using\n[`clang-query`](https:\/\/firefox-source-docs.mozilla.org\/code-quality\/static-analysis\/writing-new\/clang-query.html).\nThe following check will report any `switch` whose condition expression\nhas enumeration type, and it has a `case` that has a case constant that\nhas a different enumeration type:\n\n\n\n```\n#!\/bin\/sh\n\nPATH=$HOME\/opt\/clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04\/bin:$PATH\n\n# In this query, the comments are stripped by a 'grep' command further\n# below. (clang-query also ignores comments, but its mechanism is\n# a little buggy.)\nquery='m\n # Look for a \"switch\" statement\n switchStmt(\n # where its condition expression\n hasCondition(\n # after skipping past any implicit casts\n ignoringImpCasts(\n # is an expression\n expr(\n # whose type\n hasType(\n # after stripping typedefs, etc.,\n hasUnqualifiedDesugaredType(\n # is an enumeration\n enumType(\n # whose declaration\n hasDeclaration(\n # is an enumeration declaration. 
Bind that\n # declaration to the name \"switch-enum\".\n enumDecl().bind(\"switch-enum\")\n )\n )\n )\n )\n )\n )\n ),\n\n # Furthermore, having found a relevant \"switch\", examine each of its\n # \"case\" and \"default\" statements and report any that\n forEachSwitchCase(\n # is a \"case\" statement\n caseStmt(\n # whose constant expression\n hasCaseConstant(\n # has any descendant\n hasDescendant(\n # that is a reference to a declaration\n declRefExpr(\n # where that declaration\n hasDeclaration(\n # is an enumerator declaration\n enumConstantDecl(\n # whose parent\n hasParent(\n # is an enumeration declaration\n enumDecl(\n # unless\n unless(\n # that enumeration is the same one that we found\n # when matching the \"switch\" condition.\n equalsBoundNode(\"switch-enum\")\n )\n )\n )\n ).bind(\"case-enumerator\")\n )\n )\n )\n )\n ).bind(\"case-stmt\")\n )\n ).bind(\"switch\")\n'\n\n# Strip the comments.\nquery=$(echo \"$query\" | egrep -v '^ +#')\n\nif [ \"x$1\" = \"x\" ]; then\n echo \"usage: $0 filename.c -- <compile options like -I, etc.>\"\n exit 2\nfi\n\n# Run the query. Setting 'bind-root' to false means clang-query will\n# not also print a redundant \"root\" binding where I have bound \"switch\".\nclang-query \\\n -c=\"set bind-root false\" \\\n -c=\"$query\" \\\n \"$@\"\n\n# EOF\n\n```\n\nWhen run on this input:\n\n\n\n```\n\/\/ test.c\n\/\/ Testing mismatched switch(enum).\n\nenum A { A1, A2 };\n\nenum B { B1, B2 };\n\nint f(enum A a)\n{\n switch (a) {\n case B1: \/\/ reported\n return 1;\n case B2: \/\/ reported\n return 2;\n default:\n return 3;\n }\n}\n\nint f2(enum A a)\n{\n switch (a) {\n case 1: \/\/ not reported\n return 1;\n case 3: \/\/ not reported (but clang -Wswitch warns)\n return 3;\n case A1: \/\/ not reported: correct enumeration\n return B2; \/\/ not reported: not in the 'case' constant expr\n default:\n return B2; \/\/ not reported: not in a 'case'\n }\n}\n\ntypedef enum A AAlias;\n\nint f3(AAlias a)\n{\n \/\/ Make sure we do not get confused by a typedef.\n switch (a) {\n case B1: \/\/ reported\n return 1;\n default:\n return 0;\n }\n}\n\n\/\/ EOF\n\n```\n\nit produces this output:\n\n\n\n```\n$ .\/cmd.sh test.c --\n$(PWD)\/test.c:25:10: warning: case value not in enumerated type 'enum A' [-Wswitch]\n case 3: \/\/ not reported (but clang -Wswitch warns)\n ^\n\nMatch #1:\n\n$(PWD)\/test.c:6:14: note: \"case-enumerator\" binds here\nenum B { B1, B2 };\n ^~\n$(PWD)\/test.c:13:5: note: \"case-stmt\" binds here\n case B2: \/\/ reported\n ^~~~~~~~~~~~~~~~~~~~~~~~~~\n$(PWD)\/test.c:10:3: note: \"switch\" binds here\n switch (a) {\n ^~~~~~~~~~~~\n$(PWD)\/test.c:4:1: note: \"switch-enum\" binds here\nenum A { A1, A2 };\n^~~~~~~~~~~~~~~~~\n\nMatch #2:\n\n$(PWD)\/test.c:6:10: note: \"case-enumerator\" binds here\nenum B { B1, B2 };\n ^~\n$(PWD)\/test.c:11:5: note: \"case-stmt\" binds here\n case B1: \/\/ reported\n ^~~~~~~~~~~~~~~~~~~~~~~~~~\n$(PWD)\/test.c:10:3: note: \"switch\" binds here\n switch (a) {\n ^~~~~~~~~~~~\n$(PWD)\/test.c:4:1: note: \"switch-enum\" binds here\nenum A { A1, A2 };\n^~~~~~~~~~~~~~~~~\n\nMatch #3:\n\n$(PWD)\/test.c:6:10: note: \"case-enumerator\" binds here\nenum B { B1, B2 };\n ^~\n$(PWD)\/test.c:40:5: note: \"case-stmt\" binds here\n case B1: \/\/ reported\n ^~~~~~~~~~~~~~~~~~~~~~~~~~\n$(PWD)\/test.c:39:3: note: \"switch\" binds here\n switch (a) {\n ^~~~~~~~~~~~\n$(PWD)\/test.c:4:1: note: \"switch-enum\" binds here\nenum A { A1, A2 };\n^~~~~~~~~~~~~~~~~\n3 matches.\n\n```\n\nFor more details on what the elements of the query do, see the 
Clang\n[AST Matcher Reference](https:\/\/clang.llvm.org\/docs\/LibASTMatchersReference.html).\nIt's pretty terse though, so trial and error is required to make use of\nit.\n\n\nFWIW, I ran this over a couple large-ish C++ translation units I had\nat hand and it didn't report anything. So while I haven't really done\nany \"tuning\" of the query it appears to not explode with noise.\n\n\nOf course, adding a custom `clang-query` command to your build is much\nmore work than just adding a compiler warning option, but it's perhaps\nsomething to experiment with at least."}
{"questionId":"cabf93e4e5831cc6a908b32eaff957e8a60c814fb5e4f3d4ef4157fc918fe225","question":"Convert latitude and longitude string vector into data frame\nI am struggling to parse the location strings I have in my data.\n\n\nThe location is inconveniently set up as a string with both the latitude and longitude info bundled together and I want to extract that info into a separate variable for each (and for each observation).\n\n\nThe data I'm trying to parse looks like this:\n\n\n\n```\nID <- c(1, 2, 3)\nlocation_1 <- c(\"lat:10.1234567,lng:-70.1234567\", \"lat:20.1234567891234,lng:-80.1234567891234\", \"lat:30.1234567,lng:-90.1234567\")\n\ndf <- data.frame(ID, location_1)\n\nID location_1\n1 lat:10.1234567,lng:-70.1234567 \n2 lat:20.1234567891234,lng:-80.1234567891234\n3 lat:30.1234567,lng:-90.1234567\n\n```\n\nI'm trying to get them to look like this:\n\n\n\n```\nID latitude longitude\n1 10.1234567 -70.1234567\n2 20.1234567891234 -80.1234567891234\n3 30.12345 -90.12345\n\n```\n\nI've tried a few different solutions but I can't quite figure out the right phrasing to extract the coordinates.\n\n\nOne I tried was\n\n\n\n```\nf <- data.frame(Latitude = str_extract_all(dl$location_1, \"\\\\d+\\\\.\\\\d+\")[[1]], \n Longitude = str_extract_all(dl$location_1, \"(?<=,\\\\s)\\\\d+\\\\.\\\\d+(?=\\\\))\")[[1]])\n\n```\n\nanother was\n\n\n\n```\nstrcapture(\"\\\\(([-0-9.]+)\\\\s+([-0-9.]+)\", location_1, proto = list(lon = 1,lat = 1))\n\n```\n\nbut neither quite fit my original data so I keep getting NAs.","questionMetadata":{"type":"implementation","tag":"r","level":"beginner"},"answer":"I use `tidyr::separate_wider_delim` to separate your single column into two columns, breaking at the comma. Then, with `dplyr::across` we can apply `readr::parse_number` to parse the number out of the string for both columns:\n\n\n\n```\nlibrary(tidyr)\nlibrary(dplyr)\nlibrary(readr)\ndf |>\n separate_wider_delim(location_1, delim = \",\", names = c(\"lat\", \"lon\")) |>\n mutate(across(c(lat, lon), parse_number))\n# # A tibble: 3 \u00d7 3\n# ID lat lon\n# <dbl> <dbl> <dbl>\n# 1 1 10.1 -70.1\n# 2 2 20.1 -80.1\n# 3 3 30.1 -90.1 "}
{"questionId":"90c30112a449cb3609c82793d20f33eada30958c16ee040c94b4e69e28a0b493","question":"Generating combinations in pandas dataframe\nI have a dataset with [\"Uni\", 'Region', \"Profession\", \"Level\\_Edu\", 'Financial\\_Base', 'Learning\\_Time', 'GENDER'] columns. All values in [\"Uni\", 'Region', \"Profession\"] are filled while [\"Level\\_Edu\", 'Financial\\_Base', 'Learning\\_Time', 'GENDER'] always contain NAs.\n\n\nFor each column with NAs there are several possible values\n\n\n\n```\nLevel_Edu = ['undergrad', 'grad', 'PhD']\nFinancial_Base = ['personal', 'grant']\nLearning_Time = [\"morning\", \"day\", \"evening\"]\nGENDER = ['Male', 'Female']\n\n```\n\nI want to generate all possible combinations of [\"Level\\_Edu\", 'Financial\\_Base', 'Learning\\_Time', 'GENDER'] for each observation in the initial data. So that each initial observation would be represented by 36 new observations (obtained by the formula of combinatorics: N1 \\* N2 \\* N3 \\* N4, where Ni is the length of the i-th vector of possible values for a column)\n\n\nHere is a Python code for recreating two initial observations and approximation of the result I desire to get (showing 3 combinations out of 36 for each initial observation I want).\n\n\n\n```\nimport pandas as pd\nimport numpy as np\nsample_data_as_is = pd.DataFrame([[\"X1\", \"Y1\", \"Z1\", np.nan, np.nan, np.nan, np.nan], [\"X2\", \"Y2\", \"Z2\", np.nan, np.nan, np.nan, np.nan]], columns=[\"Uni\", 'Region', \"Profession\", \"Level_Edu\", 'Financial_Base', 'Learning_Time', 'GENDER'])\n\nsample_data_to_be = pd.DataFrame([[\"X1\", \"Y1\", \"Z1\", \"undergrad\", \"personal\", \"morning\", 'Male'], [\"X2\", \"Y2\", \"Z2\", \"undergrad\", \"personal\", \"morning\", 'Male'],\n [\"X1\", \"Y1\", \"Z1\", \"grad\", \"personal\", \"morning\", 'Male'], [\"X2\", \"Y2\", \"Z2\", \"grad\", \"personal\", \"morning\", 'Male'],\n [\"X1\", \"Y1\", \"Z1\", \"undergrad\", \"grant\", \"morning\", 'Male'], [\"X2\", \"Y2\", \"Z2\", \"undergrad\", \"grant\", \"morning\", 'Male']], columns=[\"Uni\", 'Region', \"Profession\", \"Level_Edu\", 'Financial_Base', 'Learning_Time', 'GENDER'])","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"You can combine [`itertools.product`](https:\/\/docs.python.org\/3\/library\/itertools.html#itertools.product) and a cross-[`merge`](https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.DataFrame.merge.html):\n\n\n\n```\nfrom itertools import product\n\ndata = {'Level_Edu': ['undergrad', 'grad', 'PhD'],\n 'Financial_Base': ['personal', 'grant'],\n 'Learning_Time': ['morning', 'day', 'evening'],\n 'GENDER': ['Male', 'Female']}\n\nout = (sample_data_as_is[['Uni', 'Region', 'Profession']]\n .merge(pd.DataFrame(product(*data.values()), columns=data.keys()), how='cross')\n )\n\n```\n\nOutput:\n\n\n\n```\n Uni Region Profession Level_Edu Financial_Base Learning_Time GENDER\n0 X1 Y1 Z1 undergrad personal morning Male\n1 X1 Y1 Z1 undergrad personal morning Female\n2 X1 Y1 Z1 undergrad personal day Male\n3 X1 Y1 Z1 undergrad personal day Female\n4 X1 Y1 Z1 undergrad personal evening Male\n.. .. ... ... ... ... ... 
...\n67 X2 Y2 Z2 PhD grant morning Female\n68 X2 Y2 Z2 PhD grant day Male\n69 X2 Y2 Z2 PhD grant day Female\n70 X2 Y2 Z2 PhD grant evening Male\n71 X2 Y2 Z2 PhD grant evening Female\n\n[72 rows x 7 columns]\n\n```\n\nIf you want the specific order of rows\/columns from your expected output:\n\n\n\n```\ncols = ['Uni', 'Region', 'Profession']\nout = (pd.DataFrame(product(*data.values()), columns=data.keys())\n .merge(sample_data_as_is[cols], how='cross')\n [cols+list(data)]\n )\n\n```\n\nOutput:\n\n\n\n```\n Uni Region Profession Level_Edu Financial_Base Learning_Time GENDER\n0 X1 Y1 Z1 undergrad personal morning Male\n1 X2 Y2 Z2 undergrad personal morning Male\n2 X1 Y1 Z1 undergrad personal morning Female\n3 X2 Y2 Z2 undergrad personal morning Female\n4 X1 Y1 Z1 undergrad personal day Male\n.. .. ... ... ... ... ... ...\n67 X2 Y2 Z2 PhD grant day Female\n68 X1 Y1 Z1 PhD grant evening Male\n69 X2 Y2 Z2 PhD grant evening Male\n70 X1 Y1 Z1 PhD grant evening Female\n71 X2 Y2 Z2 PhD grant evening Female\n\n[72 rows x 7 columns]"}
{"questionId":"5ad1374df3a88053bb0af6a92429d560fd2ce2fa636c5ce3a11c282a853acbba","question":"convert\\_time\\_zone` function to retrieve the values based on the timezone specified for each row in Polars\nI'm attempting to determine the time based on the timezone specified in each row using `Polars`. Consider the following code snippet:\n\n\n\n```\ndf = pl.DataFrame({\n \"time\": [datetime(2023, 4, 3, 2), datetime(2023, 4, 4, 3), datetime(2023, 4, 5, 4)],\n \"tzone\": [\"Asia\/Tokyo\", \"America\/Chicago\", \"Europe\/Paris\"]\n}).with_columns(c.time.dt.replace_time_zone(\"UTC\"))\n\ndf.with_columns(\n tokyo=c.time.dt.convert_time_zone(\"Asia\/Tokyo\").dt.hour(),\n chicago=c.time.dt.convert_time_zone(\"America\/Chicago\").dt.hour(),\n paris=c.time.dt.convert_time_zone(\"Europe\/Paris\").dt.hour()\n)\n\n```\n\nIn this example, I've computed the time separately for each timezone to achieve the desired outcome, which is [11, 22, 6], corresponding to the hour of the `time` column according to the `tzone` timezone. Even then it is difficult to collect the information from the correct column.\n\n\nUnfortunately, the following simple attempt to dynamically pass the timezone from the `tzone` column directly into the `convert_time_zone` function does not work:\n\n\n\n```\ndf.with_columns(c.time.dt.convert_time_zone(c.tzone).dt.hour())\n#TypeError: argument 'time_zone': 'Expr' object cannot be converted to 'PyString'\n\n```\n\nWhat would be the most elegant approach to accomplish this task?","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"The only way to do this which fully works with lazy execution is to use the `polars-xdt` plugin:\n\n\n\n```\ndf = pl.DataFrame(\n {\n \"time\": [\n datetime(2023, 4, 3, 2),\n datetime(2023, 4, 4, 3),\n datetime(2023, 4, 5, 4),\n ],\n \"tzone\": [\"Asia\/Tokyo\", \"America\/Chicago\", \"Europe\/Paris\"],\n }\n).with_columns(pl.col(\"time\").dt.replace_time_zone(\"UTC\"))\n\ndf.with_columns(\n result=xdt.to_local_datetime(\"time\", pl.col(\"tzone\")).dt.hour(),\n)\n\n```\n\nResult:\n\n\n\n```\nOut[6]:\nshape: (3, 3)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 time \u2506 tzone \u2506 result \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2502\n\u2502 datetime[\u03bcs, UTC] \u2506 str \u2506 i8 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 2023-04-03 02:00:00 UTC \u2506 Asia\/Tokyo \u2506 11 \u2502\n\u2502 2023-04-04 03:00:00 UTC \u2506 America\/Chicago \u2506 22 \u2502\n\u2502 2023-04-05 04:00:00 UTC \u2506 Europe\/Paris \u2506 6 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n```\n\n<https:\/\/github.com\/pola-rs\/polars-xdt>\n\n\nIf you don't need lazy execution, then as 
other answers have suggested, you can iterate over the unique values of your `'tzone'` column"}
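For completeness, the iterate-over-the-unique-time-zones route can be sketched without the plugin. This is a minimal eager sketch (not the `polars-xdt` approach): it assumes the `time`/`tzone` columns from the question and materializes the distinct zones up front, so it does not fit a fully lazy pipeline.

```
import polars as pl
from datetime import datetime

df = pl.DataFrame({
    "time": [datetime(2023, 4, 3, 2), datetime(2023, 4, 4, 3), datetime(2023, 4, 5, 4)],
    "tzone": ["Asia/Tokyo", "America/Chicago", "Europe/Paris"],
}).with_columns(pl.col("time").dt.replace_time_zone("UTC"))

# Convert once per distinct zone, keep the value only on rows whose tzone matches,
# then coalesce the per-zone results into a single column.
hour_by_zone = [
    pl.when(pl.col("tzone") == tz).then(pl.col("time").dt.convert_time_zone(tz).dt.hour())
    for tz in df["tzone"].unique()
]
print(df.with_columns(hour=pl.coalesce(hour_by_zone)))  # hour column: 11, 22, 6
```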
{"questionId":"890cfcca3b4a7de42f8b3125df6bef4f39a4ccf1f7010d39bdecacf9c3f6c131","question":"Consistent time zone for date comparison\nI am looking to pass in a date String and check if this date is before the current MST time.\n\n\nI want to be able to reliably test this locally and when I deploy it.\n\n\nLocally I am based in the UK. And when I deploy it, the server is in Phoenix.\n\n\nThe time now in the UK is `2024-06-28 15.45.00` and this logic produces true for isValidDate at present when I pass in `2024-06-28 15.00.00`.\n\n\nBut I am setting the zone to MST. I was expecting this to be false.\n \nMST time is like 8am now. So it's not before. It seems to continue to work against UK time.\n\n\nHow can I update this so that when I deploy it, it will check the date string against MST time?\nAnd locally continue to run for MST too? Essentially if I end up on another server in Australia, the logic should continue to work against MST time.\n\n\n```\nprivate static final DateTimeFormatter DATE_FORMATTER = DateTimeFormatter.ofPattern(\"yyyy-MM-dd HH.mm.ss\");\nprivate static final ZoneId MST_TIMEZONE = ZoneId.of(\"America\/Phoenix\");\n\n\/\/ Spring bean set to Clock.systemDefaultZone(); Can't change this.\n\/\/ using a clock for unit testing purposes.\nprivate final Clock clock;\n\nprivate ZonedDateTime parseDate(String dateStr) {\n try {\n return LocalDateTime\n .parse(dateStr, DATE_FORMATTER)\n .atZone(MST_TIMEZONE);\n } catch (DateTimeParseException e) {\n return null;\n }\n}\n\nprivate boolean isValidDate(String startDateTime) {\n ZonedDateTime start = parseDate(startDateTime); \n return start != null\n && start.isBefore(LocalDateTime.now(clock).atZone(MST_TIMEZONE));\n}","questionMetadata":{"type":"implementation","tag":"java","level":"intermediate"},"answer":"I think the problem you've got here is with this:\n\n\n\n```\nLocalDateTime.now(clock).atZone(MST_TIMEZONE)\n\n```\n\nThis will do different things depending on the time zone of the JVM you are running it in.\n\n\n`LocalDateTime.now(clock)` will give you the local time in the JVM's timezone - since we're both in London, let's say that `2024-06-28 16:46:23`. Invoking `atZone(MST)` on that gives you a `ZonedDateTime` which is `2024-06-28 16:46:23 -07:00`.\n\n\nIf you had run that on a server in Phoenix, `LocalDateTime.now(clock)` would have got `2024-06-28 08:46:23`; invoking `atZone(MST)` on that gives you `2024-06-28 08:46:23 -07:00`.\n\n\nIf your intention is to get the current time in `MST_TIMEZONE`, change it to:\n\n\n\n```\nclock.instant().atZone(MST_TIMEZONE)\n\n```\n\n`clock.instant()` gives you an `Instant`, which is a time zone-agnostic type. The `Instant` corresponding to the time I wrote above is `Instant.ofEpochSecond(1719593183L)`. Converting that to a `ZonedDateTime` gives the `LocalDateTime` in that zone, plus the zone."}
{"questionId":"f29debf8f7c11cbff747936901eb295cb9181108817898d0247ce2e7f4ca5ced","question":"Which one should I use in order to check the uniqueness of enum values in Python?\nI have an enum in Python, and I want to make sure that its values are unique. I see that there are two ways to achieve it:\n\n\n- Wrapping the class with `@verify(UNIQUE)`\n- Wrapping the class with `@unique`\n\n\nWhat is the difference between them? Which one should I use to gain the best performance?","questionMetadata":{"type":"conceptual","tag":"python","level":"intermediate"},"answer":"You can use whichever you prefer. In terms of implementation both are the same, and I mean literally the same: the code is copy-pasted within enum.py in the CPython repo.\nThis [code](https:\/\/github.com\/python\/cpython\/blob\/5dc8c84d397110f9edfa56793ad8887b1f176d79\/Lib\/enum.py#L1916C1-L1926C58) is for `@verify(UNIQUE)` and [this one](https:\/\/github.com\/python\/cpython\/blob\/5dc8c84d397110f9edfa56793ad8887b1f176d79\/Lib\/enum.py#L1636) for `@unique`.\n\nI would suggest using `@verify` if you have other checks as well, but if you only want to check for uniqueness it is better to use `@unique`, since it only references the code needed."}
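To make the comparison concrete, here is a small illustration with made-up enums (note that `verify` and `UNIQUE` were only added in Python 3.11, while `unique` has been around since 3.4). Both decorators raise `ValueError` at class-creation time if a duplicate value slips in:

```
from enum import Enum, unique, verify, UNIQUE

@unique
class Color(Enum):
    RED = 1
    GREEN = 2
    # CRIMSON = 1  # uncommenting this raises ValueError (duplicate value / alias)

@verify(UNIQUE)
class Status(Enum):
    OK = 0
    FAILED = 1
    # BROKEN = 1   # uncommenting this raises ValueError (alias found)
```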
{"questionId":"d96238476f8bc98a3a1e9c57a30a4fe908729031ab42e956f8cfca0c33596aef","question":"An Interesting theoretical graph theory problem\nMy dad recently introduced me to a difficult problem he is trying to solve. The original problem arose while trying to find the optimal way to perform certain sql operations. I will be rephrasing the problem to make it far more concise and because I\u2019m only interested in the theoretical algorithm and not the particular implementation.\n\n\nHere is my colorful recreation of the problem:\n\n\nYou need to make everyone like you. You can make a targeted person like you by bribing them directly or sometimes by bribing a specific group of people, who already like you, to convince the target to like you.\n\n\nGiven the total number of people N, and a list L of K bribing options find out the cheapest way to make everyone like you.\n\n\nEach bribing option shall either contain\n\n\n1. Two numbers with the first number representing the person to bribe into liking you and the second number representing the cost of the bribe.\n2. A list of numbers representing the group of people to bribe, a number representing the person to convince to like you, and a final number representing the total cost of the bribe. In order to perform a group bribe, you must of already made every individual in that group like you.\n\n\nHere is an example:\nPeople: 3\nOptions:\n\n\n- 1, 10 (you can bribe person 1 to like you for a cost of 10)\n- 1, 2, 5 (you can bribe person 1 to make person 2 like you for a cost of 5)\n- 2, 7 (you can bribe person 2 to like you for a cost of 7)\n- 1, 2, 3, 3 (you can bribe person 1 and 2 to make person 3 like you for a cost of 3)\n\n\nIn this example the optimal solution is to use the first, second, and fourth options to make everyone like you for a total cost of 18.\n\n\nCan y\u2019all help me find a generalized algorithm that will work for any case?\n\n\nThere aren\u2019t concrete number limits, but I can say that the number of people is almost always going to be small (< 10) and the number of bribing options could be very large (N \\* (2^(N-1))).","questionMetadata":{"type":"conceptual","tag":"other","level":"advanced"},"answer":"Since n (number of people) is small, an approach inefficient in n may be okay.\n\n\nLet's think of the digraph like this. Each node is a state, where each of the n people is either bribed or not bribed. Then, there are 2^n nodes.\n\n\nEach node gets an arc to each node that is immediately reachable from it, i.e., that consists of the same set of bribed people as the node in question, plus one additional. The weight\/cost of each arc is the cheapest cost of going between the nodes.\n\n\nOnce the weights are assigned, it's a matter of using your favorite algorithm to find the shortest-path in a DAG with (presumably) nonnegative weights. <https:\/\/en.wikipedia.org\/wiki\/Shortest_path_problem>\n\n\nSo how do we assign those weights? Here's one approach:\n\n\nFirst, give each node an id where the i'th bit is set if the i'th person is bribed. So the initial state node gets an id of 0, its children get ids that have 1 bit set, and so on.\n\n\nNext, associate the id that goes to each group that can be used for a bribe with the cost & id of the person to be bribed. Since one group can be used to bribe multiple people, represent this as a map {person\\_to\\_bribe => cost}.\n\n\nNext, parse all nodes in order of their distance from node 0. 
Each node's map gets created\/updated based on its predecessors, with the cost to each node being the min over its own and its predecessors' maps. Once this is done for depth i, assign the appropriate weight to the arcs out of each depth-i node, based on the updated map.\n\n\nThis is not efficient, but since n < 10 this should be fine.\n\n\n\n```\nnodes: 2^n <= 2^9 = 512\narcs: choose(n,0) * choose(n,1) + choose(n,1) * choose(n,2) + ... + choose(n,n-1) * choose(n,n) <= 43,758 (equals this for n=9)"}
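A compact sketch of the state-graph idea, for illustration only: people are encoded as bits, and instead of precomputing arc weights the applicable bribe options are checked during the search (Dijkstra is fine here since all costs are nonnegative and edges only go from smaller to larger subsets). The `(group_mask, target, cost)` encoding of the options is an assumption made for this sketch.

```
import heapq

def min_bribe_cost(n, options):
    """options: (group_mask, target, cost); a direct bribe has an empty group_mask."""
    full = (1 << n) - 1
    dist = {0: 0}
    pq = [(0, 0)]                      # (cost so far, bitmask of people who like you)
    while pq:
        cost, state = heapq.heappop(pq)
        if state == full:
            return cost
        if cost > dist.get(state, float("inf")):
            continue                   # stale queue entry
        for group, target, c in options:
            bit = 1 << target
            if state & bit or group & ~state:  # already liked, or group not fully bribed yet
                continue
            nxt = state | bit
            if cost + c < dist.get(nxt, float("inf")):
                dist[nxt] = cost + c
                heapq.heappush(pq, (cost + c, nxt))
    return None                        # not everyone can be made to like you

# The example from the question (people 1..3 mapped to bits 0..2):
options = [
    (0b000, 0, 10),  # bribe person 1 directly for 10
    (0b001, 1, 5),   # person 1 convinces person 2 for 5
    (0b000, 1, 7),   # bribe person 2 directly for 7
    (0b011, 2, 3),   # persons 1 and 2 convince person 3 for 3
]
print(min_bribe_cost(3, options))  # 18
```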
{"questionId":"9eecd1c84765dd158204ffbf72e81265b485963807607417e1fd388cc094ee75","question":"Meaning of \"trivial eligible constructor\" for implicit lifetime types and memcpy\nSay you have a type that is trivially copyable, not an aggregate, and not trivially constructible:\n\n\n\n```\nstruct Foo\n{\n Foo() = default;\n\n Foo(int i)\n : a(i)\n {};\n\n int a = 5;\n};\n\n```\n\nFoo isn't an aggregate because it has a user-declared constructor, it isn't trivially constructible because it has user-defined initialisers, and it is trivially copyable and trivially destructible. It isn't a trivial type.\n\n\nIs it legal to attempt to implicitly construct such a type via memcpy?\n\n\n\n```\nFoo* makeFooCopy(const Foo& src)\n{\n \/\/ Assume alignment isn't a problem\n auto ptr = malloc(sizeof(Foo));\n\n memcpy(ptr, &src, sizeof(Foo));\n\n return reinterpret_cast<Foo*>(ptr);\n}\n\n```\n\ncppreference says that an implicit lifetime type \"...has at least one trivial eligible constructor and a trivial, non-deleted destructor.\" (the aggregate case does not apply here). But it's not clear to me what the \"trivial eligible constructor\" is here; must it be a default constructor (i.e. this is just stating that the type needs to be trivially default constructible) or is the ability to trivially copy the object sufficient?\n\n\nThe motivating issue is a vector-like type in our code; profiling shows that in a specific use case a significant amount of our run time consists of copying contiguous containers of trivially copyable but not trivially default constructible types into our vector-like type, which is currently implemented as a loop around emplace\\_back. We would like to just use memcpy to copy the entire buffer, like so:\n\n\n\n```\ntemplate<MemcpyableContainer C>\nSpecialVectorType(const C& container)\n{\n resize(std::size(container));\n\n memcpy(our_storage, std::addressof(*std::begin(container)), std::size(container) * sizeof(element_type))\n}\n\n```\n\nbut our compiler isn't optimising out the placement new calls in resize. It's not clear to me if it's legal to elide them.","questionMetadata":{"type":"version","tag":"c++","level":"advanced"},"answer":"\"At least one trivial eligible constructor\" means any of the three constructors(default constructor\/copy constructor\/move constructor) is trivial.\n\n\nThe type trait [std::is\\_implicit\\_lifetime](https:\/\/en.cppreference.com\/w\/cpp\/types\/is_implicit_lifetime) is added in C++23.\n\n\nHere is the implementation in the related draft [p2674](https:\/\/www.open-std.org\/jtc1\/sc22\/wg21\/docs\/papers\/2022\/p2674r1.pdf)\n\n\n\n```\ntemplate <typename T>\nstruct is_implicit_lifetime\n : std::disjunction<\n std::is_scalar<T>, std::is_array<T>, std::is_aggregate<T>,\n std::conjunction<\n std::is_trivially_destructible<T>,\n std::disjunction<std::is_trivially_default_constructible<T>,\n std::is_trivially_copy_constructible<T>,\n std::is_trivially_move_constructible<T>>>> {};\n\n\n```\n\nThere's no doubt class `Foo` is an *implicit-lifetime type*. It is well defined to implicitly create such objects via `memcpy`."}
{"questionId":"2b41f4c26dfdad5e0fe6b89e08295df070e4fdd414945a05ccfeb05b43b18567","question":"Reduce the sum of differences between adjacent array elements\nI came across a coding challenge on the internet the question is listed below:\n\n\n\n> \n> Have the function FoodDistribution(arr) read the array of numbers\n> stored in arr which will represent the hunger level of different\n> people ranging from 0 to 5 (0 meaning not hungry at all, 5 meaning\n> very hungry). You will also have N sandwiches to give out which will\n> range from 1 to 20. The format of the array will be [N, h1, h2, h3,\n> ...] where N represents the number of sandwiches you have and the rest\n> of the array will represent the hunger levels of different people.\n> Your goal is to minimize the hunger difference between each pair of\n> people in the array using the sandwiches you have available.\n> \n> \n> For example: if arr is [5, 3, 1, 2, 1], this means you have 5\n> sandwiches to give out. You can distribute them in the following order\n> to the people: 2, 0, 1, 0. Giving these sandwiches to the people their\n> hunger levels now become: [1, 1, 1, 1]. The difference between each\n> pair of people is now 0, the total is also 0, so your program should\n> return 0. Note: You may not have to give out all, or even any, of your\n> sandwiches to produce a minimized difference.\n> \n> \n> Another example: if arr is [4, 5, 2, 3, 1, 0] then you can distribute\n> the sandwiches in the following order: [3, 0, 1, 0, 0] which makes all\n> the hunger levels the following: [2, 2, 2, 1, 0]. The differences\n> between each pair of people is now: 0, 0, 1, 1 and so your program\n> should return the final minimized difference of 2.\n> \n> \n> \n\n\nMy first approach was to try to solve it greedily as the following:\n\n\n1. Loop until the sandwiches are zero\n2. For each element in the array copy the array and remove one hunger at location i\n3. Get the best combination that will give you the smallest hunger difference\n4. Reduce the sandwiches by one and consider the local min as the new hunger array\n5. 
Repeat until sandwiches are zero or the hunger difference is zero\n\n\nI thought when taking the local minimum it led to the global minimum which was wrong based on the following use case `[7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]`\n\n\n\n```\ndef FoodDistribution(arr):\n sandwiches = arr[0]\n hunger_levels = arr[1:]\n\n # Function to calculate the total difference\n def total_difference(hunger_levels):\n return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1))\n\n def reduce_combs(combs):\n local_min = float('inf')\n local_min_comb = None\n for comb in combs:\n current_difference = total_difference(comb)\n if current_difference < local_min:\n local_min = current_difference\n local_min_comb = comb\n\n return local_min_comb\n # Function to distribute sandwiches\n def distribute_sandwiches(sandwiches, hunger_levels):\n global_min = total_difference(hunger_levels)\n print(global_min)\n while sandwiches > 0 and global_min > 0:\n combs = []\n for i in range(len(hunger_levels)):\n comb = hunger_levels[:]\n comb[i] -= 1\n combs.append(comb)\n\n local_min_comb = reduce_combs(combs)\n x = total_difference(local_min_comb)\n print( sandwiches, x, local_min_comb)\n global_min = min(global_min, x)\n hunger_levels = local_min_comb\n sandwiches -= 1\n return global_min\n\n # Distribute sandwiches and calculate the minimized difference\n global_min = distribute_sandwiches(sandwiches, hunger_levels)\n return global_min\n\nif __name__ == \"__main__\":\n print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))\n\n```\n\nI changed my approach to try to brute force and then use memorization to optimize the time complexity\n\n\n1. Recurse until out of bounds or sandwiches are zero\n2. For each location there are two options either to use a sandwich or ignore\n3. When the option is to use a sandwich decrement sandwiches by one and stay at the same index.\n4. When the option is to ignore increment the index by one.\n5. Take the minimum between the two options and return it.\n\n\nThe issue here is that I didn't know what to store in the memo and storing the index and sandwiches is not enough. I am not sure if this problem has a better complexity than 2^(n+s). 
Is there a way to know if dynamic programming or memorization is not the way to solve the problem and in this case can I improve the complexity by memorization or does this problem need to be solved with a different approach?\n\n\n\n```\ndef FoodDistribution(arr):\n sandwiches = arr[0]\n hunger_levels = arr[1:]\n\n # Distribute sandwiches and calculate the minimized difference\n global_min = solve(0, sandwiches, hunger_levels)\n return global_min\n\n\ndef solve(index, sandwiches, hunger_levels):\n if index >= len(hunger_levels) or sandwiches == 0:\n return total_difference(hunger_levels)\n\n # take a sandwich\n hunger_levels[index] += -1\n sandwiches += -1\n minTake = solve(index, sandwiches, hunger_levels)\n hunger_levels[index] += 1\n sandwiches += 1\n\n # dont take sandwich\n dontTake = solve(index + 1, sandwiches, hunger_levels)\n\n return min(minTake, dontTake)\n\n\ndef total_difference(hunger_levels):\n return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1))\n\nif __name__ == \"__main__\":\n print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))\n\n```\n\n**Edit:** Multiple states will give you the optimal answer for the use case above\n\n\n\n```\nsandwiches = 7 \nhunger = [5, 4, 3, 4, 5, 2, 3, 1, 4, 5]\noptimal is 6\nstates as follow\n[3, 3, 3, 3, 3, 2, 2, 1, 4, 5]\n[4, 3, 3, 3, 3, 2, 2, 1, 4, 4]\n[4, 4, 3, 3, 2, 2, 2, 1, 4, 4]\n[4, 4, 3, 3, 3, 2, 1, 1, 4, 4]\n[4, 4, 3, 3, 3, 2, 2, 1, 3, 4]\n[4, 4, 3, 3, 3, 2, 2, 1, 4, 4]\n[5, 4, 3, 3, 3, 2, 2, 1, 3, 3]\n\n```\n\n**Note:** I accepted @Matt Timmermans answer as it provides the best time complexity n and nlogn. But the two other answer are amazing and good to understand and be able to implement the solution using dynamic programming or memorization. Personally I prefer the memorization version expected time complexity is s*n*h where h is the max hunger level in the array.","questionMetadata":{"type":"implementation","tag":"python","level":"intermediate"},"answer":"The sum of the absolute differences only goes down when you reduce a local maximum.\n\n\nIf you reduce a maximum on either end, the sum of differences goes down by one, like `[3,2,1]` -> `[2,2,1]`.\n\n\nIf you reduce a maximum in the middle, the sum of differences goes down by two, like `[1,3,2]` -> `[1,2,2]`.\n\n\nIf a maximum gets reduced, it may merge into another maximum that you can reduce, but the new maximum will never be cheaper or more cost effective. It can only get wider, like `[1,3,2]` -> `[1,2,2]`.\n\n\nThe optimal strategy is, therefore, just to repeatedly reduce the most cost-effective maximum, in terms of `benefit\/width`, that you have enough sandwiches to reduce. `benefit` is 1 for maximums on the ends or 2 for maximums in the middle.\n\n\nStop when you no longer have enough sandwiches to reduce the narrowest maximum.\n\n\nYou can do this in O(n) time by finding all the maximums and keeping them in a priority queue to process them in the proper order as they are reduced.\n\n\nO(n log n) is easy. In order to make that O(n) bound, you'll need to use a counting-sort-type priority queue instead of a heap. 
You also need to be a little clever about keeping track of the regions of the array that are known to be at the same height so you can merge them in constant time.\n\n\nHere's an O(n) implementation in python\n\n\n\n```\ndef distribute(arr):\n\n foodLeft = arr[0]\n hungers = arr[1:]\n\n # For each element in hungers, calculate number of adjacent elements at same height\n spans = [1] * len(hungers)\n for i in range(1, len(hungers)):\n if hungers[i-1]==hungers[i]:\n spans[i] = spans[i-1]+1\n for i in range(len(hungers)-2, -1, -1):\n if hungers[i+1]==hungers[i]:\n spans[i] = spans[i+1]\n\n # spans are identified by their first element. Only the counts and hungers on the edges need to be correct\n\n # if a span is a maximum, it's height. Otherwise 0\n def maxHeight(left):\n ret = len(spans)\n if left > 0:\n ret = min(ret, hungers[left] - hungers[left-1])\n right = left + spans[left]-1\n if right < len(spans)-1:\n ret = min(ret, hungers[right] - hungers[right+1])\n return max(ret,0)\n \n # change the height of a span and return the maybe new span that it is a part of\n def reduce(left, h):\n right = left + spans[left] - 1\n hungers[left] -= h\n hungers[right] = hungers[left]\n if right < len(spans)-1 and hungers[right+1] == hungers[right]:\n # merge on the right\n w = spans[right+1]\n spans[right] = spans[right+1] = 0 # for debuggability\n right += w\n if left > 0 and hungers[left-1] == hungers[left]:\n # merge on left\n w = spans[left-1]\n spans[left] = spans[left-1] = 0 # for debuggability\n left -= w\n spans[left] = spans[right] = right - left + 1\n return left\n \n def isEdge(left):\n return left < 1 or left + spans[left] >= len(spans)\n \n # constant-time priority queue for non-edge spans\n # it's just a list of spans per width\n pq = [[] for _i in range(len(spans)+1)]\n\n # populate priority queue\n curspan = 0\n while curspan < len(spans):\n width = spans[curspan]\n if maxHeight(curspan) > 0 and not isEdge(curspan):\n pq[width].append(curspan)\n curspan += width\n\n # this will be True at the end if we can sacrifice one edge max selection to get one\n # mid max selection, which would produce one more point\n canBacktrack = False\n # process mid spans in order\n curpri = 1\n # while not all hungers are the same\n while spans[0] < len(spans):\n\n # find the best middle maximum\n bestmid = None\n midwidth = None\n if curpri < len(pq) and curpri <= foodLeft:\n if len(pq[curpri]) == 0:\n curpri += 1\n continue\n bestmid = pq[curpri][-1]\n midwidth = spans[bestmid]\n\n # find the best edge maximum\n bestedge = None\n edgewidth = None\n if maxHeight(0) > 0 and foodLeft >= spans[0]:\n bestedge = 0\n edgewidth = spans[0]\n r = len(spans)-spans[-1]\n if maxHeight(r) > 0 and foodLeft >= spans[r] and (bestedge == None or spans[r] < edgewidth):\n bestedge = r\n edgewidth = spans[r]\n\n # choose\n bestspan = None\n h = 0\n if bestedge == None:\n if bestmid == None:\n break\n bestspan = bestmid\n bestwidth = midwidth\n canBacktrack = False\n elif bestmid == None:\n bestspan = bestedge\n bestwidth = edgewidth\n canBacktrack = False\n elif midwidth <= edgewidth*2:\n # mid maximum is more cost effective\n # OR choo\n bestspan = bestmid\n bestwidth = midwidth\n canBacktrack = False\n else:\n bestspan = bestedge\n bestwidth = edgewidth\n # tentative\n canBacktrack = True\n \n if bestspan == bestmid:\n # chose the middle span -- remove from pq\n pq[curpri].pop()\n\n # how much we can reduce this maxium by\n h = min(foodLeft\/\/bestwidth, maxHeight(bestspan))\n foodLeft -= bestwidth*h\n canBacktrack = 
canBacktrack and foodLeft < midwidth and foodLeft + edgewidth >= midwidth\n bestspan = reduce(bestspan, h)\n if maxHeight(bestspan) > 0 and not isEdge(bestspan):\n pq[spans[bestspan]].append(bestspan)\n \n # finally, calculate the new total diffs\n totaldiff = 0\n curspan = spans[0]\n while curspan < len(spans):\n totaldiff += abs(hungers[curspan] - hungers[curspan-1])\n curspan += spans[curspan]\n if canBacktrack:\n totaldiff -= 1\n return totaldiff\n\n# test\ncases = [\n [8, 11, 14, 15, 16, 13, 2, 3],\n [7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5],\n [2, 4, 4, 3, 4, 5],\n [3, 3, 4, 4, 4, 3, 4],\n [4, 3, 4, 4, 4, 3, 5],\n [5, 3, 4, 4, 4, 3, 6],\n [3, 3, 4, 4, 3, 4, 5]\n]\nfor case in cases:\n print(\"{0}: {1}\".format(case, distribute(case)))"}
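For comparison with the question's second attempt, a sketch of a memoized search in its spirit: the missing piece of memo state is the previous person's final hunger level. Capping each hand-out at the person's current hunger is an assumption (pushing anyone below zero never helps here). This is roughly n*s*h states with up to h transitions each, so it is much slower than the O(n) solution above, but handy as a cross-check.

```
from functools import lru_cache

def food_distribution(arr):
    sandwiches, hunger = arr[0], arr[1:]

    @lru_cache(maxsize=None)
    def best(i, s, prev):
        # Cheapest total of adjacent differences for people i.. with s sandwiches left,
        # given that person i-1 ended up at hunger level `prev` (None before the first person).
        if i == len(hunger):
            return 0
        result = float("inf")
        for give in range(min(s, hunger[i]) + 1):
            level = hunger[i] - give
            step = 0 if prev is None else abs(level - prev)
            result = min(result, step + best(i + 1, s - give, level))
        return result

    return best(0, sandwiches, None)

print(food_distribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5]))  # 6
print(food_distribution([5, 3, 1, 2, 1]))                     # 0
print(food_distribution([4, 5, 2, 3, 1, 0]))                  # 2
```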
{"questionId":"e29bebd94a2a1d51bf948f969c25489168101cd1d751aa02424a681c3120d2c6","question":"Memory leaking in .NET HttpClient, JsonSerializer or misused Stream?\nI have a basic background class in an otherwise empty ASP.NET Core 8 Minimal API project.\n\n\nApp startup is just:\n\n\n\n```\nbuilder.Services.AddHttpClient();\nbuilder.Services.AddHostedService<SteamAppListDumpService>();\n\n```\n\nThe background class is for saving snapshots of a Steam API endpoint, all basic stuff:\n\n\n\n```\npublic class SteamAppListDumpService : BackgroundService\n{\n static TimeSpan RepeatDelay = TimeSpan.FromMinutes(30);\n private readonly IHttpClientFactory _httpClientFactory;\n\n private string GetSteamKey() => \"...\";\n\n private string GetAppListUrl(int? lastAppId = null)\n {\n return $\"https:\/\/api.steampowered.com\/IStoreService\/GetAppList\/v1\/?key={GetSteamKey()}\" +\n (lastAppId.HasValue ? $\"&last_appid={lastAppId}\" : \"\");\n }\n\n public SteamAppListDumpService(IHttpClientFactory httpClientFactory)\n {\n _httpClientFactory = httpClientFactory;\n }\n\n protected override async Task ExecuteAsync(CancellationToken stoppingToken)\n {\n while (!stoppingToken.IsCancellationRequested)\n {\n await DumpAppList();\n await Task.Delay(RepeatDelay, stoppingToken);\n }\n }\n\n public record SteamApiGetAppListApp(int appid, string name, int last_modified, int price_change_number);\n public record SteamApiGetAppListResponse(List<SteamApiGetAppListApp> apps, bool have_more_results, int last_appid);\n public record SteamApiGetAppListOuterResponse(SteamApiGetAppListResponse response);\n\n protected async Task DumpAppList()\n {\n try\n {\n var httpClient = _httpClientFactory.CreateClient();\n var appList = new List<SteamApiGetAppListApp>();\n int? lastAppId = null;\n do\n {\n using var response = await httpClient.GetAsync(GetAppListUrl(lastAppId));\n if (!response.IsSuccessStatusCode) throw new Exception($\"API Returned Invalid Status Code: {response.StatusCode}\");\n\n var responseString = await response.Content.ReadAsStringAsync();\n var responseObject = JsonSerializer.Deserialize<SteamApiGetAppListOuterResponse>(responseString)!.response;\n appList.AddRange(responseObject.apps);\n lastAppId = responseObject.have_more_results ? responseObject.last_appid : null;\n\n } while (lastAppId != null);\n\n var contentBytes = JsonSerializer.SerializeToUtf8Bytes(appList);\n using var output = File.OpenWrite(Path.Combine(Config.DumpDataPath, DateTime.UtcNow.ToString(\"yyyy-MM-dd__HH-mm-ss\") + \".json.gz\"));\n using var gz = new GZipStream(output, CompressionMode.Compress);\n gz.Write(contentBytes, 0, contentBytes.Length);\n }\n catch (Exception ex)\n {\n Trace.TraceError(\"skipped...\");\n }\n }\n}\n\n```\n\nThe API returns approx 16 MB of data in total, then it compresses\/saves it to a 4 MB file, every 30 minutes, nothing else. In between runs, when the garbage collector runs I would expect the memory consumption to drop to almost nothing, but it increases over time, as an example it's been running for 2 hours on my PC and is consuming 700MB memory. On my server it's been running for 24 hours and is now consuming 2.5 GB memory.\n\n\nAs far as I can tell all the streams are disposed, `HttpClient` is created using the recommended `IHttpClientFactory`, does anyone know why this basic functionality is consuming so much memory even after garbage collection? I've tried looking at it in the VS manage memory dump but can't find much useful. Does this point to a memory leak in one of the classes (i.e. 
`HttpClient` \/ `SerializeToUtf8Bytes`) or am I missing something?\n\n\nThe `responseString` and `contentBytes` are usually around 2MB.","questionMetadata":{"type":"debugging","tag":"c#","level":"intermediate"},"answer":"Any time you allocate a contiguous block of memory >= 85,000 bytes in size, it goes into the [large object heap](https:\/\/learn.microsoft.com\/en-us\/dotnet\/standard\/garbage-collection\/large-object-heap). Unlike the regular heap it isn't compactified unless you do so [manually](https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.runtime.gcsettings.largeobjectheapcompactionmode)[1] so it can grow due to fragmentation giving the appearance of a memory leak. See *[Why Large Object Heap and why do we care?](https:\/\/stackoverflow.com\/q\/8951836)*.\n\n\nAs your `responseString` and `contentBytes` are usually around 2 MB I would recommend rewriting your code to eliminate them. Instead, asynchronously stream directly from your server and to your JSON file using the relevant built-in APIs like so:\n\n\n\n```\nconst int BufferSize = 16384;\nconst bool UseAsyncFileStreams = true; \/\/https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.io.filestream.-ctor?view=net-5.0#System_IO_FileStream__ctor_System_String_System_IO_FileMode_System_IO_FileAccess_System_IO_FileShare_System_Int32_System_Boolean_\n\nprotected async Task DumpAppList()\n{\n try\n {\n var httpClient = _httpClientFactory.CreateClient();\n var appList = new List<SteamApiGetAppListApp>();\n int? lastAppId = null;\n do\n {\n \/\/ Get the SteamApiGetAppListOuterResponse directly from JSON using HttpClientJsonExtensions.GetFromJsonAsync() without the intermediate string.\n \/\/ https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.net.http.json.httpclientjsonextensions.getfromjsonasync\n \/\/ If you need customized error handling see \n \/\/ https:\/\/stackoverflow.com\/questions\/65383186\/using-httpclient-getfromjsonasync-how-to-handle-httprequestexception-based-on\n var responseObject = (await httpClient.GetFromJsonAsync<SteamApiGetAppListOuterResponse>(GetAppListUrl(lastAppId)))\n !.response;\n appList.AddRange(responseObject.apps);\n lastAppId = responseObject.have_more_results ? responseObject.last_appid : null;\n\n } while (lastAppId != null);\n\n await using var output = new FileStream(Path.Combine(Config.DumpDataPath, DateTime.UtcNow.ToString(\"yyyy-MM-dd__HH-mm-ss\") + \".json.gz\"),\n FileMode.Create, FileAccess.Write, FileShare.None, bufferSize: BufferSize, useAsync: UseAsyncFileStreams);\n await using var gz = new GZipStream(output, CompressionMode.Compress);\n \/\/ See https:\/\/faithlife.codes\/blog\/2012\/06\/always-wrap-gzipstream-with-bufferedstream\/ for a discussion of buffer sizes vs compression ratios.\n await using var buffer = new BufferedStream(gz, BufferSize);\n \/\/ Serialize directly to the buffered, compressed output stream without the intermediate in-memory array.\n await JsonSerializer.SerializeAsync(buffer, appList);\n }\n catch (Exception ex)\n {\n Trace.TraceError(\"skipped...\");\n }\n}\n\n```\n\nNotes:\n\n\n- [`GZipStream`](https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.io.compression.gzipstream) does not buffer its input so there is a chance that streaming to it incrementally can result in worse compression ratios. 
However, as discussed by Bradley Grainger in [Always wrap GZipStream with BufferedStream](https:\/\/faithlife.codes\/blog\/2012\/06\/always-wrap-gzipstream-with-bufferedstream\/), buffering the incremental writes using a buffer that is 8K or larger effectively eliminates the problem.\n- According to the [docs](https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.io.filestream.-ctor?view=net-8.0#system-io-filestream-ctor(system-string-system-io-filemode-system-io-fileaccess-system-io-fileshare-system-int32-system-boolean)), the `useAsync` argument to the `FileStream` constructor\n\n\n\n> \n> Specifies whether to use asynchronous I\/O or synchronous I\/O. However, note that the underlying operating system might not support asynchronous I\/O, so when specifying true, the handle might be opened synchronously depending on the platform. When opened asynchronously, the [BeginRead(Byte[], Int32, Int32, AsyncCallback, Object)](https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.io.filestream.beginread?view=net-5.0#System_IO_FileStream_BeginRead_System_Byte___System_Int32_System_Int32_System_AsyncCallback_System_Object_) and [BeginWrite(Byte[], Int32, Int32, AsyncCallback, Object)](https:\/\/learn.microsoft.com\/en-us\/dotnet\/api\/system.io.filestream.beginwrite?view=net-5.0#System_IO_FileStream_BeginWrite_System_Byte___System_Int32_System_Int32_System_AsyncCallback_System_Object_) methods perform better on large reads or writes, but they might be much slower for small reads or writes. If the application is designed to take advantage of asynchronous I\/O, set the useAsync parameter to true. Using asynchronous I\/O correctly can speed up applications by as much as a factor of 10, but using it without redesigning the application for asynchronous I\/O can decrease performance by as much as a factor of 10.\n> \n> \n> \n\n\nThus you may need to test to see whether, in practice, you get better performance with `UseAsyncFileStreams` equal to `true` or `false`. You may also need to play around with the buffer sizes to get the best performance and compression ratio -- always being sure to keep the buffer smaller than 85,000 bytes.\n- If you think large object heap fragmentation may be a problem, see the MSFT article [The large object heap on Windows systems: A debugger](https:\/\/learn.microsoft.com\/en-us\/dotnet\/standard\/garbage-collection\/large-object-heap#a-debugger) for suggestions on how to investigate further.\n- Since your `DumpAppList()` method only runs every half hour anyway, you might try compacting the large object heap manually after each run to see if that helps:\n\n\n\n```\n protected override async Task ExecuteAsync(CancellationToken stoppingToken)\n {\n while (!stoppingToken.IsCancellationRequested)\n {\n await DumpAppList();\n GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;\n GC.Collect(); \n\n await Task.Delay(RepeatDelay, stoppingToken);\n }\n }\n\n```\n- You may want to pass the `CancellationToken stoppingToken` into `DumpAppList()`.\n\n\n\n\n---\n\n\n[1] Do note that, in [Memory management and garbage collection (GC) in ASP.NET Core: Large object heap](https:\/\/learn.microsoft.com\/en-us\/aspnet\/core\/performance\/memory?view=aspnetcore-8.0#large-object-heap), MSFT writes:\n\n\n\n> \n> In containers using .NET Core 3.0 and later, the LOH is automatically compacted.\n> \n> \n> \n\n\nSo my statement about when LOH compaction occurs may be out of date on certain platforms."}
{"questionId":"6f816570f8878a2a7eccaab2cf1c677e43b7ae0a022fb8b4195fd1f217d1179d","question":"Clean way to check if variable is list of lists using pattern matching\nIn my code, I need to distinguish a list of records from a list of lists of records. The existing code does it like so:\n\n\n\n```\nif isinstance(input_var, list):\n if len(input_var) > 0:\n if all(isinstance(item, list) for item in input_var):\n return list_of_lists_processing(input_var)\n elif not any(isinstance(item, list) for item in input_var):\n return list_of_data_processing(input_var)\n else:\n raise ValueError(f\"Unexpected input_var value {input_var}\")\n else:\n return list()\nelse:\n raise ValueError(f\"Unexpected input_var value {input_var}\")\n\n```\n\nHowever, this seems ugly. I want to use Python 3.10's pattern matching to simplify the code. I came up with this version:\n\n\n\n```\nmatch input_var:\n case [list(), *_]:\n return list_of_lists_processing(input_var)\n case list():\n # also process empty list case\n return list_of_data_processing(input_var)\n case _:\n raise ValueError(f\"Unexpected value {input_var=}\")\n\n```\n\nBut there is a flaw here: `case [list(), *_]` only checks the first element of input_var, not all of them. In practice, this is enough for my purposes, but I want to ask anyway: **is there a clean way to match only a list where every element is a list**?\n\n\nI tried `case [*list()]:`, but this causes a `SyntaxError`. `case list(list()):` is syntactically correct, but doesn't work as expected (for example, it matches `[\"a\"]` - what is going on here?)","questionMetadata":{"type":"conceptual","tag":"python","level":"intermediate"},"answer":"You can match the set of item types inside your list.\n\n\n\n```\nclass Constants:\n set_of_list = {list}\n\nmatch set(type(elem) for elem in input_var):\n case Constants.set_of_list:\n return list_of_lists_processing(input_var)\n case types if list in types:\n raise ValueError(f\"Unexpected input_var value {input_var}\")\n case _:\n return list_of_data_processing(input_var)\n\n```\n\nPython 3.10 does not support matching values inside a set, so you still have to check if `list` is one of the types.\n\n\nThe Constants class is used to trigger a value pattern. Raymond Hettinger gave [a great talk](https:\/\/www.youtube.com\/watch?v=ZTvwxXL37XI) explaining this and other concepts related to pattern matching."}
{"questionId":"252a9d699a8d22d9169c9d27ef540041f7ec1a7056a1e3b26b4f34eb75b85c9d","question":"Rails 7 way of auto-loading methods into controllers via engine\nI'm looking into updating one of my favorite CMSs to Rails 7 that have been archived on github (PushType). Only I haven't coded Rails since Rails 6. Apparently, something about autoloading methods changed in Rails 7. I am getting this error:\n\n\n\n```\nNameError: uninitialized constant PushType::ApplicationControllerMethods\n include PushType::ApplicationControllerMethods\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n```\n\nfor this line in the engine:\n\n\n\n```\n initializer 'push_type.application_controller' do\n ActiveSupport.on_load :action_controller do\n # ActionController::Base.send :include, PushType::ApplicationControllerMethods\n include PushType::ApplicationControllerMethods\n end\n end\n\n```\n\n- the engine is located in root\/core\/lib\/push\\_type\/core\/engine.rb\n- the location of the definition of the controller methods in question is at: root\/core\/app\/controllers\/concerns -- within concerns directory namespacing should work but it isn't, in the controller concerns directory the methods are found in push\\_type\/application\\_controller\\_methods.rb\n\n\nI don't know what I'm doing given my hiatus from the language. but in my attempt to resolve this I have tried autoloading that concerns directory within the gem's engine like this:\n\n\n\n```\n config.autoload_paths << File.expand_path(\"..\/..\/..\/app\/controllers\/concerns\", __FILE__) <<\n # File.expand_path(\"..\/..\/..\/app\/controllers\/concerns\/push_type\", __FILE__)\n File.expand_path(\"..\/..\/..\/app\/helpers\", __FILE__)\n\n```\n\nthe full engine.rb file looks like this:\n\n\n\n```\nmodule PushType\n module Core\n class Engine < ::Rails::Engine\n isolate_namespace PushType\n engine_name 'push_type'\n\n config.autoload_paths << File.expand_path(\"..\/..\/..\/app\/controllers\/concerns\", __FILE__) <<\n # File.expand_path(\"..\/..\/..\/app\/controllers\/concerns\/push_type\", __FILE__)\n File.expand_path(\"..\/..\/..\/app\/helpers\", __FILE__)\n\n # config.autoload_paths << \"#{root}\/app\/controllers\/concerns\" <<\n # \"#{root}\/app\/controllers\/concerns\/push_type\" <<\n # \"#{root}\/app\/helpers\"\n\n # lib = root.join(\"lib\")\n # config.autoload_once_paths.ignore(\n # lib.join(\"assets\"),\n # lib.join(\"tasks\"),\n # lib.join(\"generators\")\n # )\n\n config.generators do |g|\n g.assets false\n g.helper false\n g.test_framework :test_unit, fixture: false\n g.hidden_namespaces << 'push_type:dummy' << 'push_type:field'\n end\n\n config.assets.precompile += %w(\n *.gif *.jpg *.png *.svg *.eot *.ttf *.woff *.woff2\n )\n\n config.to_prepare do\n Rails.application.eager_load! 
unless Rails.application.config.cache_classes\n end\n\n initializer 'push_type.dragonfly_config' do\n PushType.config.dragonfly_secret ||= Rails.application.secrets.secret_key_base\n PushType.dragonfly_app_setup!\n end\n\n initializer 'push_type.application_controller' do\n ActiveSupport.on_load :action_controller do\n # ActionController::Base.send :include, PushType::ApplicationControllerMethods\n include PushType::ApplicationControllerMethods\n end\n end\n\n initializer 'push_type.menu_helpers' do\n ActiveSupport.on_load :action_view do\n include PushType::MenuBuilder::Helpers\n end\n end\n end\n end\nend","questionMetadata":{"type":"version","tag":"ruby","level":"intermediate"},"answer":"It is too early, autoloading of reloadable code is not yet ready in Rails 7.\n\n\nHowever, as it happens, that is catching a gotcha. This is why Rails 7 does not allow access to reloadable classes that early. Since the module is included in a non-reloadable class (`AC::Base`), it makes no sense for it to be reloadable, because reloads would have no effect for that included module.\n\n\nPlease, delete the custom `autoload_paths` configuration, and add the concerns and helpers directories of the engine to the `autoload_once_paths`. The non-reloadable classes and modules in that collection are available earlier."}
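A minimal sketch of how that suggestion could look in the PushType engine, based on the layout shown in the question (the relative File.expand_path calls mirror the question's own and are illustrative, not taken from the original answer):

```
module PushType
  module Core
    class Engine < ::Rails::Engine
      isolate_namespace PushType
      engine_name 'push_type'

      # Instead of config.autoload_paths, mark the engine's concerns and
      # helpers as non-reloadable. Rails keeps autoload_once_paths out of the
      # reloadable autoload paths, and constants in them become available
      # early enough for the on_load hook below.
      config.autoload_once_paths << File.expand_path('../../../app/controllers/concerns', __FILE__)
      config.autoload_once_paths << File.expand_path('../../../app/helpers', __FILE__)

      # unchanged from the question
      initializer 'push_type.application_controller' do
        ActiveSupport.on_load :action_controller do
          include PushType::ApplicationControllerMethods
        end
      end
    end
  end
end
```

Because the included module is never reloaded anyway (it ends up inside the non-reloadable ActionController::Base), making it once-autoloaded loses nothing and removes the boot-time NameError.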
{"questionId":"0a2903ffbff1b8eba2b48ce8d18e02ef04dbbe9210e6ede18e9259b0e13928f0","question":"Test for object callability in match-case construct\nFor context, I have a function that matches keys in a dictionary to perform certain action; to match item keys, the function accepts either a sequence of keys to match, or a function that recognizes those keys.\n\n\nI'm wondering if I can use the `match-case` pattern for it. I try something like:\n\n\n\n```\ndef process_fields(dataset,function,fields):\n \"\"\"Apply function to selected values of a dictionary\n fields can be a list of keys whose values shall be processed, \n or a predicate that returns True for the targeted fields.\"\"\"\n \n match fields:\n case list() | set() | tuple():\n key_matcher = lambda x:x in fields\n case <what I'm looking for>:\n key_matcher=fields\n walk_items(dataset,key_matcher,function)\n\n\n```\n\nSo far I've tried:\n\n\n\n```\n case callable(function):\n key_matcher=function\n\n```\n\n\n```\n case typing.Callable(function):\n key_matcher=function\n\n```\n\nI can't find what I need in the official documentation. I'm I missing something or it's not doable?\n\n\nExample is here to avoid too dry abstraction. Note that I'm NOT looking for alternatives to solve that particular problem, I can perfectly do it myself; I'm looking to find out if it exist a way to use that python structure in particular.\n\n\n*Edit: Even though the function itself is not the focus of the post, I've added a docstring and simplified a bit to clarify the example.*","questionMetadata":{"type":"conceptual","tag":"python","level":"intermediate"},"answer":"If you want to test if an object is Callable then:\n\n\n\n```\nfrom collections.abc import Callable\n\ndef func():\n pass\n\nf = func\n\nmatch f:\n case Callable():\n print(\"Yes it's callable\")"}
{"questionId":"66c91c713da2e0b3fea0ab0a41f88f1fdf68029a0f2af36d1c5d67e64cf43c29","question":"Capturing all matches of a string value from an array of regex patterns, while prioritizing closest matches\nLet's say I have an array of names, along with a regex union of them:\n\n\n\n```\nmatch_array = [\/Dan\/i, \/Danny\/i, \/Daniel\/i]\nmatch_values = Regexp.union(match_array)\n\n```\n\nI'm using a regex union because the actual data set I'm working with contains strings that often have extraneous characters, whitespaces, and varied capitalization.\n\n\nI want to iterate over a series of strings to see if they match any of the values in this array. If I use `.scan`, only the first matching element is returned:\n\n\n\n```\n'dan'.scan(match_values) # => [\"dan\"]\n'danny'.scan(match_values) # => [\"dan\"]\n'daniel'.scan(match_values) # => [\"dan\"]\n'dannnniel'.scan(match_values) # => [\"dan\"]\n'dannyel'.scan(match_values) # => [\"dan\"]\n\n```\n\nI want to be able to capture all of the matches (which is why I thought to use `.scan` instead of `.match`), but I want to prioritize the closest\/most exact matches first. If none are found, then I'd want to default to the partial matches. So the results would look like this:\n\n\n\n```\n'dan'.scan(match_values) # => [\"dan\"]\n'danny'.scan(match_values) # => [\"danny\",\"dan\"]\n'daniel'.scan(match_values) # => [\"daniel\",\"dan\"]\n'dannnniel'.scan(match_values) # => [\"dan\"]\n'dannyel'.scan(match_values) # => [\"danny\",\"dan\"]\n\n```\n\nIs this possible?","questionMetadata":{"type":"implementation","tag":"ruby","level":"intermediate"},"answer":"You can do something like this:\n\n\n\n```\nmatch_array = [\/Dan\/i, \/Danny\/i, \/Daniel\/i]\n\nstrings=['dan','danny','daniel','dannnniel','dannyel']\n\np strings.\n map{|s| [s, match_array.filter{|m| s=~m}]}.to_h\n\n```\n\nPrints:\n\n\n\n```\n{\"dan\"=>[\/Dan\/i], \n \"danny\"=>[\/Dan\/i, \/Danny\/i], \n \"daniel\"=>[\/Dan\/i, \/Daniel\/i], \n \"dannnniel\"=>[\/Dan\/i], \n \"dannyel\"=>[\/Dan\/i, \/Danny\/i]}\n\n```\n\nAnd you can convert the regexes to strings of any case if desired:\n\n\n\n```\np strings.\n map{|s| [s, match_array.filter{|m| s=~m}.\n map{|r| r.source.downcase}]}.to_h\n\n```\n\nPrints:\n\n\n\n```\n{\"dan\"=>[\"dan\"], \n \"danny\"=>[\"dan\", \"danny\"], \n \"daniel\"=>[\"dan\", \"daniel\"], \n \"dannnniel\"=>[\"dan\"], \n \"dannyel\"=>[\"dan\", \"danny\"]}\n\n```\n\nThen if 'closest' is equivalent to 'longest' just sort by length of the regex source (ie, `Dan` in the regex `\/Dan\/i`):\n\n\n\n```\np strings.\n map{|s| [s, match_array.filter{|m| s=~m}.\n map{|r| r.source.downcase}.\n sort_by(&:length).reverse]}.to_h \n\n```\n\nPrints:\n\n\n\n```\n{\"dan\"=>[\"dan\"], \n \"danny\"=>[\"danny\", \"dan\"], \n \"daniel\"=>[\"daniel\", \"dan\"], \n \"dannnniel\"=>[\"dan\"], \n \"dannyel\"=>[\"danny\", \"dan\"]}\n\n```\n\nBut that only works with literal string matches. What would you expect with `\"dannnniel\"=~\/.*\/` which is a 'closer' match than `\"dannnniel\"=~\/Dan\/i`?\n\n\nSuppose by 'closest' you mean the longest substring returned by the regex match -- so something like `\/.*\/` is longer than any substring of the string to be matched. 
You can do:\n\n\n\n```\nmatch_array = [\/Dan\/i, \/Danny\/i, \/Daniel\/i, \/.{3}\/, \/.*\/]\n\nstrings=['dan','danny','daniel','dannnniel','dannyel']\n\np strings.\n map{|s| [s, match_array.filter{|m| s=~m}.\n sort_by{|m| s[m].length}.reverse]}.to_h\n\n```\n\nWhich now sorts on the length of the match vs the length of the regex:\n\n\n\n```\n{\"dan\"=>[\/.*\/, \/.{3}\/, \/Dan\/i], \n \"danny\"=>[\/.*\/, \/Danny\/i, \/.{3}\/, \/Dan\/i],\n \"daniel\"=>[\/.*\/, \/Daniel\/i, \/.{3}\/, \/Dan\/i], \n \"dannnniel\"=>[\/.*\/, \/.{3}\/, \/Dan\/i],\n \"dannyel\"=>[\/.*\/, \/Danny\/i, \/.{3}\/, \/Dan\/i]}"}
{"questionId":"b75b74be5afb39a3664a0a4a1ed48f5b8f9895ff0aac91fbb63ea9b8f0c0bf2b","question":"Count consecutive elements in Pandas list\nI have created the following pandas dataframe:\n\n\n\n```\nimport pandas as pd\nimport numpy as np\n\nds = {'col1':[1,\"S\",3,4,\"S\"], 'col2' : [6,\"S\",8,9,\"S\"],'col3' : [67,None,87,79,\"S\"]}\n\ndf = pd.DataFrame(data=ds)\n\ndf['col4']= df[['col1','col2','col3']].values.tolist()\n\n```\n\nThe dataframe looks like this:\n\n\n\n```\nprint(df)\n\n col1 col2 col3 col4\n0 1 6 67 [1, 6, 67]\n1 S S None [S, S, None]\n2 3 8 87 [3, 8, 87]\n3 4 9 79 [4, 9, 79]\n4 S S S [S, S, S]\n\n```\n\nFor each record, I need to calculate the number of consecutive \"S\" inside `col4`. The resulting dataframe would look like this:\n\n\n\n```\n col1 col2 col3 col4 iCount\n0 1 6 67 [1, 6, 67] 0\n1 S S None [S, S, None] 2\n2 3 8 87 [3, 8, 87] 0\n3 4 9 79 [4, 9, 79] 0\n4 S S S [S, S, S] 3\n\n```\n\nI have tried this code:\n\n\n\n```\ncol4 = np.array(df['col4'])\niCount = 0\nfor i in range(len(df)):\n for j in range(len(col4[i])):\n\n if(col4[i][j] == \"S\"):\n iCount += 1\n \n else:\n iCount = 0\n\ndf['iCount'] = iCount\n\n```\n\nBut I get the following dataframe:\n\n\n\n```\n col1 col2 col3 col4 iCount\n0 1 6 67 [1, 6, 67] 3\n1 S S None [S, S, None] 3\n2 3 8 87 [3, 8, 87] 3\n3 4 9 79 [4, 9, 79] 3\n4 S S S [S, S, S] 3\n\n```\n\nPlease can someone help me find the error?","questionMetadata":{"type":"debugging","tag":"python","level":"beginner"},"answer":"I would use [`itertools.groupby`](https:\/\/docs.python.org\/3\/library\/itertools.html#itertools.groupby):\n\n\n\n```\nfrom itertools import groupby\n\ndef consec(lst):\n return max((len(list(g)) for k,g in\n groupby(lst, lambda x: x=='S') if k), default=0)\n\ndf['iCount'] = df['col4'].map(consec)\n\n```\n\n*NB. using `max` here to get the longest sequence since there could be more than one stretch of S's, but you could use `min`\/`sum` or any other logic.*\n\n\nIf you are sure there is a maximum of **one** series of S per list, you could simplify to:\n\n\n\n```\ndf['iCount'] = [sum(x=='S' for x in lst) for lst in df['col4']]\n\n```\n\nOutput:\n\n\n\n```\n col1 col2 col3 col4 iCount\n0 1 6 67 [1, 6, 67] 0\n1 S S None [S, S, None] 2\n2 3 8 87 [3, 8, 87] 0\n3 4 9 79 [4, 9, 79] 0\n4 S S S [S, S, S] 3"}
{"questionId":"2d28af4c089b36b2658cf6638083a36379f2e0e841149e195cd983ca04647100","question":"Remove items from list starting with a list of prefixes\nI have a list of strings and a list of prefixes. I want to remove all elements from the list of strings that start with a prefix from the list of prefixes.\n\n\nI used a `for` loop, but why doesn't it seem to work?\n\n\n\n```\nlist_of_strings = ['test-1: foo', 'test-2: bar', 'test-3: cat']\nlist_of_prefixes = ['test1', 'test-2']\n\nfinal_list = []\nfor i in list_of_strings:\n for j in list_of_prefixes:\n if not i.startswith(j):\n final_list.append(i)\n \nprint(list(set(final_list)))\n\n```\n\nCurrently the output is\n\n\n\n```\n['test-3: cat', 'test-1: foo', 'test-2: bar']\n\n```\n\nThe output I want to get is\n\n\n\n```\nfinal_list = ['test-3: cat']","questionMetadata":{"type":"debugging","tag":"python","level":"beginner"},"answer":"Your approach doesn't work because you potentially perform an append for each element in `list_of_prefixes`, but if the string does start with one of the prefixes, it's guaranteed to *not* start with one of the others, so they all get added.\n\n\nWith list comprehensions, generator expressions, and `any`, this is very straightforward.\n\n\n\n```\n>>> list_of_strings = ['test-1: foo', 'test-2: bar', 'test-3: cat']\n>>> list_of_prefixes = ['test1', 'test-2']\n>>> filtered = [\n... s \n... for s in list_of_strings \n... if not any(s.startswith(p) for p in list_of_prefixes)\n... ]\n>>> filtered\n['test-1: foo', 'test-3: cat']\n\n```\n\nNote that `'test-1: foo'` does not start with `'test1'` or `'test-2'`. If you meant for the `list_of_prefixes` to include `'test-1'` then you would get the output you expect."}
{"questionId":"08fdbc279308b7617802f1a574ecb22015ff840bf1c661d19a69715d100c51ae","question":"Python 3.10.4 scikit-learn import hangs when executing via CPP\nPython 3.10.4 is embedded into cpp application.\nI'm trying to import sklearn library which is installed at custom location using pip --target.\n\n\nsklearn custom path (--target path) is appended to sys.path.\n\n\nBelow is a function from the script which just prints the version information.\n\n\nExecution using Command Line works well as shown below.\n\n\n\n```\npython3.10 -c 'from try_sklearn import *; createandload()'\n\n```\n\nOutput\n\n\n\n```\n[INFO ] [try_sklearn.py:23] 3.10.4 (main, Aug 4 2023, 01:24:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]\n[INFO ] [try_sklearn.py:24] sklearn \/users\/xxxx\/temp\/python\/scikit-learn\/sklearn\/__init__.py Version = 1.5.1\n\n```\n\nThe same script when called using CPP, **hangs** at\n\n\n\n```\nimport sklearn\n\n```\n\nOther libraries like pandas, numpy etc works without any issues.","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"<https:\/\/github.com\/scipy\/scipy\/issues\/21189>\n\n\nLooks like Scipy and Numpy do not support Embedded python"}
{"questionId":"14e8df1b34bc2effeb4a85fdf612a48c837b4ad6ecf686cf1743932e8bda87d1","question":"Minimal path on weighted tree query\nGiven a weighted tree with n vertices. there are q queries and for each query, you are given integers (u,k). find number of vertices v such that the smallest edge on the route from u to v is equal to k. (n,q <= 1e5)\n\n\ni tried using dfs but the best solution i could think is O(n\\*q)\n\n\nMy current code:\n\n\n\n```\n#include <bits\/stdc++.h>\n \nusing namespace std;\n\nconst int INF = 1e9;\n\nstruct Edge {\n int to;\n int weight;\n};\n \nvector<vector<Edge>> adj;\nvector<int> mn;\n\nvoid dfs(int u, int parent, int minWeight) {\n mn[u] = minWeight;\n for (auto edge : adj[u]) {\n if (edge.to != parent) {\n dfs(edge.to, u, min(minWeight, edge.weight));\n }\n }\n}\n \nint main() {\n int n, q;\n cin >> n >> q;\n adj.resize(n + 1);\n mn.resize(n + 1);\n for (int i = 0; i < n - 1; ++i) {\n int u, v, w;\n cin >> u >> v >> w;\n adj[u].push_back({v, w});\n adj[v].push_back({u, w});\n }\n while (q--) {\n int u, k;\n cin >> u >> k;\n fill(mn.begin(), mn.end(), INF);\n dfs(u, -1, INF);\n int cnt = 0;\n for (int v = 1; v <= n; ++v) {\n if (v != u && mn[v] == k) {\n cnt++;\n }\n }\n \n cout << cnt << endl;\n }\n \n return 0;\n}","questionMetadata":{"type":"implementation","tag":"c++","level":"intermediate"},"answer":"This can be solved offline by first reading all queries, then sorting them by edge weight in non-increasing order. We can use a disjoint set to maintain the forest formed by using only edges with weight greater than a certain value. We also sort the edges in the tree in non-increasing order and add edges of certain weights back in that order. Whenever we add edges back, we check for queries for that specific weight. The increase in component size for any node after adding these edges back is the number of vertices that have this edge weight as the minimum on the path. Note that queries for edge weights that do not exist in the tree always result in `0`.\n\n\nWe can use a modified version of the disjoint set such that the root of each component stores the negated size of the component, to make it easier to answer queries as well as implement union by size. The time complexity of this solution is `O(N log N + (N + Q) log Q + (N + Q)\u03b1(N))` (where `\u03b1` is the inverse Ackermann function and effectively constant here).\n\n\nThis can be solved online, but the code gets a lot more complicated.\n\n\n\n```\n#include <vector>\n#include <iostream>\n#include <map>\n#include <functional>\n#include <utility>\nstd::vector<int> ds; \/\/ the disjoint set\nint find(int u) {\n return ds[u] < 0 ? 
u : ds[u] = find(ds[u]);\n}\nint main() {\n int n, q;\n std::cin >> n >> q;\n std::vector<int> answers(q);\n ds.assign(n + 1, -1);\n std::map<int, std::vector<std::pair<int, int>>, std::greater<>> edgesForWeight, queriesForWeight;\n for (int i = 1, u, v, w; i < n; ++i) {\n std::cin >> u >> v >> w;\n edgesForWeight[w].push_back({u, v});\n }\n for (int i = 0, u, k; i < q; ++i) {\n std::cin >> u >> k;\n queriesForWeight[k].push_back({i, u});\n }\n for (const auto& [weight, edges] : edgesForWeight) {\n auto queriesIt = queriesForWeight.find(weight);\n if (queriesIt != queriesForWeight.end())\n for (auto [qidx, node] : queriesIt->second)\n answers[qidx] = ds[find(node)];\n for (auto [u, v] : edges) {\n u = find(u), v = find(v);\n if (ds[u] > ds[v]) std::swap(u, v);\n ds[u] += ds[v];\n ds[v] = u;\n }\n if (queriesIt != queriesForWeight.end())\n for (auto [qidx, node] : queriesIt->second)\n answers[qidx] -= ds[find(node)];\n }\n for (int ans : answers) std::cout << ans << '\\n';\n}"}
{"questionId":"ddc228ac14338d4af393ee0f68d5b71b332aec4d9955d8288997da56f09ac16c","question":"How do I update a d3 projection to match zoom transform?\nThis is my zoom handler for my map:\n\n\n\n```\nconst zoom = d3.zoom()\n .scaleExtent([1,25])\n .translateExtent([[width * -0.5, height * -0.5], [width * 1.5,height*1.5]])\n .on('zoom', (ev) => {\n svg.selectAll('path').attr('transform', ev.transform); \n })\n\n```\n\nIt updates the paths in the svg using the transform params from the event. This works great, but if I use `projection(point)` or similar methods to return the x,y coordinates of a point, then they will be incorrect.\n\n\nI realise I need to update my projection to update the zoom\/pan behaviour.\n\n\nIf I record the original map translation before any zooming, `const origTrans = projection.translate();` and then apply the x,y transforms then I am able to correctly sync the projection for the top zoom level (ie k=1).\n\n\n\n```\n.on(\"end\", (ev)=> {\n projection.translate([origTrans[0] + ev.transform.x * ev.transform.k, origTrans[1] + ev.transform.y * ev.transform.k]);\n const c = projection([-3.3632, 55]);\n svg.append(\"circle\")\n .attr(\"cx\", c[0])\n .attr(\"cy\", c[1])\n .attr(\"r\", 9)\n .attr(\"fill\", \"red\");\n }); \n\n```\n\nI'm unclear as how zoom level relates to the projection scale. I can't achieve the same thing\n\n\nI've tried a few things e.g. - `projection.scale(ev.transform.k)`, or `projection.scale(projection.scale() * ev.transform.k)` - I'm assuming there's a lot more to it? If it helps I am using geoMercator for the projection.","questionMetadata":{"type":"implementation","tag":"javascript","level":"intermediate"},"answer":"Rereading your question closer, you may be complicating the problem. The projection's scale and translate can be entirely independent from the SVG's zoom state.\n\n\nReferencing one from the other creates more problems than it's worth, partly because your number of dynamic coordinate systems increases, partly because you may need to do things like recalculate projected points continuously throughout drag events (depending on approach, which can be laggy).\n\n\n\n\n---\n\n\nMy understanding of the problem is: your SVG paths rescale but you need to extract, interact, update, or plot specific points and\/or non path elements on the SVG to reflect their new location.\n\n\nWhy not use the same approach for the circles\/points\/other elements as the paths? To do so I'd create a new `g` to hold all zoomable elements, paths and otherwise, apply the zoom transform on that, this way the zoom itself takes care of all scaling for you:\n\n\n\n```\nlet zoomG = svg.append('g');\n\nzoom.on('zoom', (ev) => {\n zoomG.attr('transform', ev.transform); \n })\n\n```\n\nAny coordinates of children inside the zoomG will be represented using projected pixel values from the projection. The zoomG is then transformed as a whole according to the zoom.\n\n\nFor example, the below plots some paths and a circle (London) to start. 
Regardless of zoom state Singapore will be plotted correctly on click anywhere on the map (it'll disappear after a few seconds until clicking again), while the existing features will be panned and zoomed correctly.\n\n\n\n\n\n```\nvar svg = d3.select(\"svg\")\n .attr(\"width\", 500)\n .attr(\"height\", 500)\n\nvar projection = d3.geoMercator()\n .scale(500 \/ 2 \/Math.PI )\n .translate([250,250])\n \nlet zoomG = svg.append('g');\n\nlet zoom = d3.zoom()\n .on(\"zoom\", (ev)=> zoomG.attr('transform', ev.transform))\n\n \nsvg.call(zoom);\n \nsvg.on(\"click\", function(ev) {\n let xy = d3.pointer(ev, zoomG.node());\n let longlat = projection.invert(xy);\n console.log(\"mouse click at: \" + xy + \" which represents: \" + longlat);\n \n zoomG.append(\"circle\")\n .attr(\"cx\", projection([103.820,1.352])[0])\n .attr(\"cy\", projection([103.820,1.352])[1])\n .attr(\"r\", 4)\n .transition()\n .attr(\"r\", 0)\n .duration(2000)\n .remove();\n \n})\n \n\nd3.json(\"https:\/\/raw.githubusercontent.com\/holtzy\/D3-graph-gallery\/master\/DATA\/world.geojson\").then( function(data){\n \n zoomG.selectAll(\"path\")\n .data(data.features)\n .enter().append(\"path\")\n .attr(\"fill\", \"#eee\")\n .attr(\"d\", d3.geoPath()\n .projection(projection)\n )\n .style(\"stroke\", \"#ccc\")\n \n zoomG.append(\"circle\")\n .attr(\"cx\", projection([0.128,51.507])[0])\n .attr(\"cy\", projection([0.128,51.507])[1])\n .attr(\"r\", 4)\n \n})\n```\n\n\n```\n<script src=\"https:\/\/cdnjs.cloudflare.com\/ajax\/libs\/d3\/7.8.5\/d3.min.js\"><\/script>\n<svg><\/svg>\n```\n\n\n\n\n\n\nIn the above I've also added a demonstration on how to calculate the geographic position of the mouse.\n\n\n\n```\nlet xy = d3.pointer(ev, zoomG.node());\nlet longlat = projection.invert(xy);\nconsole.log(\"mouse click at: \" + xy + \" which represents: \" + longlat);\n\n```\n\nWe don't need to worry about this when moving from projected coordinates to pixel coordinates as the nesting of the paths\/circles\/whatever in a parent G with the zoom transform takes care of this for us. But going the reverse direction, we need to consider the zoom transform in where the mouse actually is in projected coordinate space (which is where d3.pointer comes in)."}
{"questionId":"57b74e1b85d1f3fd9271bba7a382bf89894e7c43e4c6df7e1b5d765510010c5d","question":"Why is format() throwing ValueError: Unknown format code 'f' for object of type 'str' when I'm not inputting a string?\nI am using Python 2.7. (Switching to Python 3 for this particular code is not an option, please don't suggest it.) I am writing unit tests for some code.\n\n\nHere is the relevant piece of code:\n\n\n\n```\nclass SingleLineGrooveTable:\n VEFMT = '.3f'\n\n @classmethod\n def formatve(cls, value, error=None):\n er = 0\n if error is not None:\n er = error\n v = value\n elif len(value) > 1:\n v, er = value\n else:\n v = value\n return format(v, cls.VEFMT), format(er, cls.VEFMT)\n\n```\n\nand my test is:\n\n\n\n```\nimport unittest\n\nclass TestSingleLineGrooveTable(unittest.TestCase):\n \n def test_formatve_no_error(self):\n e_v = '3.142'\n e_er = '0.000'\n r_v, r_er = SingleLineGrooveTable.formatve([3.1423])\n self.assertEqual(e_v, r_v)\n self.assertEqual(e_er, r_er)\n\n```\n\n(Yes, I know it's funny I'm getting an error on the test with \"no\\_error\" in the name...)\n\n\nWhen I run the test, it throws `ValueError: Unknown format code 'f' for object of type 'str'` on the return statement for the function. But I can't figure out where it's getting a str from. Possibly relevant, this code and the code I have that uses it were copied pretty much wholesale from someone else's code (who I can no longer contact), so maybe I'm calling it in the wrong way, but still, that's a list, not a string!\n\n\nWhat is going on here? How do I fix this?","questionMetadata":{"type":"version","tag":"python","level":"intermediate"},"answer":"On Python 2, `object.__format__` effectively delegates to `format(str(self), format_spec)`. You can see the implementation [here](https:\/\/github.com\/python\/cpython\/blob\/v2.7.18\/Objects\/typeobject.c#L3592).\n\n\nSince `list` inherits `object.__format__`, your first `format` call is effectively calling `format(str([3.1423]), '.3f')`. That's why you get the error message you do.\n\n\nThis would still produce an error on Python 3. It'd just be a different error."}