TL;DR: to me it makes sense to go with the PR changes; overall it performs better.
Observations:

1. Performance changes depending on the benchmark function order. This tells me my workstation's CPU gets throttled after a while, as the benchmark takes quite some time.
2. If I execute single cases independently (e.g., only 100, only 1000, only 15000, etc.), the TypedArray.set/slice variant is always more performant, and the benchmark function order no longer matters. You can see this behavior in the "focused" benchmarks below.
3. The fillFrom variant actually got faster starting with Node.js 20; it is much closer now.
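For context, here is a minimal JavaScript sketch of the two strategies as I understand them. The function names mirror the benchmark methods below; the actual compiled output of the PR may differ in detail.

```javascript
// "fillFrom": element-wise copy loop into a pre-allocated typed array.
function fillFrom(source, newSize) {
  const result = new Int8Array(newSize);
  const n = Math.min(source.length, newSize);
  for (let i = 0; i < n; i++) {
    result[i] = source[i];
  }
  return result;
}

// "typedArraySet": bulk copy via TypedArray.prototype.set, falling back to
// slice() when the source must be truncated to fit the destination.
function typedArraySet(source, newSize) {
  const result = new Int8Array(newSize);
  if (source.length <= newSize) {
    result.set(source);
  } else {
    result.set(source.slice(0, newSize)); // slice: truncate before bulk copy
  }
  return result;
}
```

Both produce the same result; the difference is that the loop avoids the intermediate `slice` allocation for small inputs, while the bulk `set` path wins as the arrays grow.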
```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@Warmup(iterations = 4, time = 5, timeUnit = BenchmarkTimeUnit.SECONDS)
@Measurement(iterations = 4, time = 5, timeUnit = BenchmarkTimeUnit.SECONDS)
public class ByteArrayBenchmark {
    private val fromArray: ByteArray
    @Param("15", "100", "1000", "5000", "15000", "100000", "1000000")
```
```kotlin
import kotlinx.benchmark.*
import kotlin.math.min

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@Warmup(iterations = 4, time = 5, timeUnit = BenchmarkTimeUnit.SECONDS)
@Measurement(iterations = 4, time = 5, timeUnit = BenchmarkTimeUnit.SECONDS)
public class CharArrayToStringBenchmark {
    private val array = CharArray(1000000) {
        when (it) {
```
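On the JS target, a CharArray-to-String conversion boils down to building a string from an array of char codes. A sketch of the two usual approaches (my illustration, not the PR's actual code; the chunk size is an arbitrary example value):

```javascript
// Per-character concatenation: simple, but allocates many intermediate strings.
function concatPerChar(codes) {
  let s = "";
  for (let i = 0; i < codes.length; i++) {
    s += String.fromCharCode(codes[i]);
  }
  return s;
}

// Chunked String.fromCharCode.apply: far fewer calls, with a bounded argument
// count (engines limit how many arguments a single call may receive).
function fromCharCodeChunked(codes, chunkSize = 8192) {
  const parts = [];
  for (let i = 0; i < codes.length; i += chunkSize) {
    parts.push(String.fromCharCode.apply(null, codes.slice(i, i + chunkSize)));
  }
  return parts.join("");
}
```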
fromArray.size = 50:

```
Benchmark                          (newSize)   Mode  Cnt      Score      Error  Units
TypedArrayBenchmark.fillFrom              10  thrpt    4  19720.314 ± 1061.689  ops/ms
TypedArrayBenchmark.fillFrom              30  thrpt    4  13720.991 ±  469.513  ops/ms
TypedArrayBenchmark.fillFrom              45  thrpt    4  10231.173 ±  263.520  ops/ms
TypedArrayBenchmark.fillFrom              60  thrpt    4   9780.742 ±  154.579  ops/ms
TypedArrayBenchmark.fillFrom              80  thrpt    4   2063.709 ±   12.744  ops/ms
TypedArrayBenchmark.typedArraySet         10  thrpt    4  17124.189 ±  223.028  ops/ms  // slice
TypedArrayBenchmark.typedArraySet         30  thrpt    4  16386.410 ± 1269.211  ops/ms  // slice
```
```kotlin
@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
@Warmup(iterations = 4, time = 10, timeUnit = BenchmarkTimeUnit.SECONDS)
@Measurement(iterations = 4, time = 10, timeUnit = BenchmarkTimeUnit.SECONDS)
public class TypedArrayBenchmark {
    private var fromArray = ByteArray(0)
    @Param("100", "1000", "5000", "15000", "100000")
    public var newSize: Int = 0
```
This is a draft: feel free to comment and suggest improvements here or on Google Docs by 07.01.25 at 21:00 (UTC+1). It is intended to be published where others can support the idea.
Since Elon Musk's acquisition of Twitter over two years ago, the platform—now rebranded as X—has taken a troubling turn. The growing influence of its CEO in shaping political narratives and promoting specific ideologies has become increasingly apparent. As one of the world’s most visited social media platforms, with millions of active users and a significant influence on media outlets, it is alarming to see X transform from a potential
I could not find a proper, detailed (and up-to-date) reverse engineering
of Omegle's text chat protocol on the internet, so here, have one made by analyzing the web app (web requests and source code).
The responses are beautified and the query strings split up and URI-decoded for
readability.
Note that "query string" refers to parameters encoded into the URL, and
"form data" to parameters in the POST body, which do not have to be URI-encoded.
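To illustrate the distinction in JavaScript (all names and values here are placeholders, not Omegle's actual endpoints or parameters):

```javascript
const base = "https://example.org/start"; // placeholder URL, not Omegle's

// Query string: parameters encoded into the URL. URLSearchParams URI-encodes
// each value for you (note the comma becoming %2C).
const query = new URLSearchParams({ lang: "en", topics: "cats,dogs" });
const url = `${base}?${query.toString()}`;

// Form data: parameters in the POST body, assembled here by hand; the value
// is not URI-encoded (the space stays a literal space).
const formBody = "id=abc123&msg=hello world";
```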
TODO:
Note: Nexus group repositories (a good example is in this StackOverflow question) are out of this tutorial's scope. In any case, deployment to group repositories is currently still an open issue for Nexus 3 (and was never intended to be implemented in Nexus 2). Thus, we'll assume that we push & pull to/from the same repository, and ignore the idea of groups from here on.
-
Ask your sysadmin for a username & password allowing you to log into your organisation's Nexus Repository Manager.
-
Test the login credentials on the Nexus Repository Manager at: http://localhost:8081/nexus/#view-repositories (
`localhost` in our case is replaced by a static IP, and can only be connected to over VPN). If your organisation requires a VPN to connect to it, connect to that VPN before proceeding with this tutorial.
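Assuming the later steps use Maven (an assumption on my part; adapt for other build tools), the credentials from step 1 typically go into `~/.m2/settings.xml`. The `<id>` value below is a placeholder and must match the repository id referenced by your POM's `<distributionManagement>` section:

```xml
<!-- ~/.m2/settings.xml -->
<settings>
  <servers>
    <server>
      <!-- placeholder id: must match the repository id in your POM -->
      <id>my-nexus</id>
      <username>your-username</username>
      <password>your-password</password>
    </server>
  </servers>
</settings>
```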
