
The Rust Programming Language Forum

@users.rust-lang.org.web.brid.gy

General discussion of The Rust Programming Language [bridged from https://users.rust-lang.org/ on the web: https://fed.brid.gy/web/users.rust-lang.org ]

271 Followers  |  0 Following  |  16,585 Posts  |  Joined: 08.11.2024

Latest posts by users.rust-lang.org.web.brid.gy on Bluesky

`Pin<Rc<T>>` vs `Rc<Pin<T>>` You don't need to use `Pin` at all. A given `Rc` always points to the same address. If you did need `Pin`, then the thing you would want is `Pin<Rc<T>>`, because that means the `T` is pinned (not the `Rc`). There's no such thing as `Rc<Pin<T>>` unless `T` itself is a pointer. But you don't need pinning at all; it's already guaranteed that the allocation will stay in one place.
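A quick illustration of that guarantee (just an example, not from the thread): cloning or moving the `Rc` handle never moves the heap allocation it points to.

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(42);
    let addr = Rc::as_ptr(&a);

    let b = Rc::clone(&a); // cloning the handle...
    let moved = a;         // ...or moving it around...

    // ...never moves the heap allocation it points to.
    assert_eq!(Rc::as_ptr(&b), addr);
    assert_eq!(Rc::as_ptr(&moved), addr);
}
```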
12.06.2025 15:52
`Pin<Rc<T>>` vs `Rc<Pin<T>>`

pufferfish101007:

> Context: I've got a data structure (call it `Thing`) which is always passed around in an `Rc`, and `Thing` currently has an `id` field, because two `Thing`s with the same data are not necessarily the same thing (and may become different in the future because of interior mutability). So my implementation of `PartialEq` for `Thing` compares their `id`s; `Hash` also just uses the id.
>
> Generating and storing ids seems unnecessary though, seeing as the `Rc` provides an id for us in the form of its backing pointer.

Going off the description and thinking ahead just a little bit, using an in-memory location for the `id` might not be the ideal way to go. For one: each and every `Thing` you create will only ever be able to reference _itself_, by its _own_ reference. Any caching approach you might ever come up with will have to be limited to the current allocations of `Thing`s on the heap. The moment your program stops running, all of the `id`s will need to be regenerated from scratch at the next launch.

Assuming you're fine with all of the above:

pufferfish101007:

> The actual question: If I want to guarantee that the pointer I obtain from an `Rc` will always point to the same data for the lifetime of that `Rc`, should I be using `Pin<Rc<Thing>>` or `Rc<Pin<Thing>>`?

Both will work: the `Rc<Pin<Box<...>>>` is a bit clearer (especially if you're new to `Pin`), while the `Pin<Rc<...>>` will most likely have less of an impact on your code later on, performance-wise. As @Cerber-Ursi has pointed out, a `Pin` itself doesn't "pin" or do anything to the data. It's just a [compile-time] "marker" that imposes additional constraints on the use of a given pointer; quoting the `Pin` docs ([`Pin::new_unchecked`]):

> At its core, _pinning_ a value means _making the guarantee_ that the value's data will not be moved nor have its storage invalidated until it gets dropped.

pufferfish101007:

> Moreover, it seems to me that doing anything useful with a `Pin<Rc<T>>` is quite difficult because I don't think you can just access a `&Rc<T>` from a `Pin<Rc<T>>`, which you need for doing anything useful with an `Rc`.

`Rc<T>` is (for all intents and purposes here) just a pointer to a heap allocation of `T`, with counters for the number of "strong" and "weak" references to that allocation given away on each `clone()`. An `&Rc<T>` is a pointer to _that_ pointer. A `Pin<Rc<T>>` is an `Rc<T>` with some guarantees on top. Both auto-`Deref` into `&T`. To `clone()` the `Pin<Rc<T>>` is to `clone()` the `Rc<T>` inside it. What other "useful" thing would you need to do there?

If you were thinking about getting your `id` from the `&Rc<T>` itself: that is most definitely one of the worst ways to go. Again: think of the `Rc<T>` as just another `&` reference. An `&Rc<T>` is a temporary, stack-based, ephemeral `&&T`. Whatever `Rc<T>`s you are going to be using are going to be placed on the stack, `*`-ed into `T`, and promptly discarded. Any `id` you get from them will be invalidated pretty much as soon as you declare it. If anything, it's the `&T` you want here.
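A minimal sketch of the pointer-identity approach being discussed, with a hypothetical `Thing` and plain `Rc` (no `Pin`), along the lines of the reply above:

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::rc::Rc;

// Hypothetical stand-in for the OP's `Thing`.
#[allow(dead_code)]
struct Thing {
    data: String,
}

// Identity-keyed wrapper: equality and hashing use the allocation address,
// not the contents.
struct ThingRef(Rc<Thing>);

impl PartialEq for ThingRef {
    fn eq(&self, other: &Self) -> bool {
        Rc::ptr_eq(&self.0, &other.0)
    }
}
impl Eq for ThingRef {}

impl Hash for ThingRef {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // The backing pointer is stable for the lifetime of the allocation.
        (Rc::as_ptr(&self.0) as usize).hash(state);
    }
}

fn main() {
    let a = Rc::new(Thing { data: "same".into() });
    let b = Rc::new(Thing { data: "same".into() });

    let mut set = HashSet::new();
    set.insert(ThingRef(Rc::clone(&a)));

    assert!(set.contains(&ThingRef(Rc::clone(&a)))); // same allocation
    assert!(!set.contains(&ThingRef(b)));            // equal data, different identity
}
```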
12.06.2025 15:50
Can the compiler reorder code around my log with timestamps? I think it should be fine _for the most part_. Obviously if this were critical (e.g., people die if the timestamps are off), then this wouldn't be fine, but you'd be dealing with _a lot_ more things as well (e.g., formally verifying your code is correct, using a real-time operating system, etc.). Even if you could somehow guarantee Rust won't reorder things, that doesn't guarantee the CPU won't. Also, `Instant` may experience "time dilation" and may not even account for system suspends, so I think this level of concern doesn't make sense if you can't also account for those other things, especially since they may have a higher probability of happening and thus should be your first concern (e.g., I think a system suspend causing issues is more likely). If you have actually deployed this and you start encountering issues, then it makes more sense to revisit this; although, as stated above, you'll still have some difficult problems to solve.
12.06.2025 15:44
Rust-analyzer dimming #[cfg(not(test))] Thanks, that did it.
12.06.2025 15:39
Why learning C before Rust is not necessary Redglyph: > It may seem simple to us, The principle is simple, and someone interested in learning Rust does not need more than that. We can explain a lot more when we want to -- how stack frames grow when functions are called, how the function's local variables, parameters, and other metadata like the return address are stored, or how the OS manages the heap in detail -- I do not understand those O(1) heap allocators, or arena allocators, myself. But I think all these details are not really necessary when someone is learning the Rust language. I agree that at some point Rust users should learn that, if they don't already know it. But a reasonably detailed explanation should take no more than 3 pages in a book, and readers should have no problem understanding it in 30 minutes; otherwise the explanation is bad, or too detailed for the reader. I think when we were introduced to Pascal at university, the professor talked about stack vs. heap for only 10 minutes, and I do not think we students missed anything at the time. The question is whether a Rust book should discuss stack vs. heap at all, or just refer to other resources, like Wikipedia. Well, sometimes Wikipedia can be difficult to understand, so having a 3-page appendix in a book might be an option.
12.06.2025 14:16
`Pin<Rc<T>>` vs `Rc<Pin<T>>` pufferfish101007: > the former implies that the address of the `Rc` won't change, which isn't helpful because I'll be cloning `Rc`s anyway, whilst the latter implies that the location of the `Thing`, which will be the backing pointer of the `Rc`, won't change. That's not the correct interpretation, though. `Pin` is always used as some kind of `Pin<Ptr<T>>`, where `Ptr<T>` is some kind of pointer to `T` that ensures `T` stays at a consistent location, as long as we don't hand out any `&mut T`s. The whole logic is in the fact that getting `&mut T` out of `Pin<Ptr<T>>` is `unsafe` (unless `T` is `Unpin` -- in that case, `Pin` essentially does nothing at all). So the thing you want to use is probably `Pin<Rc<T>>`.
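A small illustration of the `Unpin` escape hatch, and of the fact that a `Pin<Rc<T>>` still reads and clones like an `Rc` (types chosen only for the example):

```rust
use std::pin::Pin;
use std::rc::Rc;

fn main() {
    // `String: Unpin`, so `Pin::new` is available and `Pin` adds no restrictions here.
    let pinned: Pin<Rc<String>> = Pin::new(Rc::new(String::from("hi")));

    // `Pin<Rc<T>>` still derefs to `&T` ...
    assert_eq!(pinned.len(), 2);

    // ... and cloning it clones the inner `Rc`: both handles point at the same allocation.
    let also = Pin::clone(&pinned);
    assert!(std::ptr::eq(&*also, &*pinned));
}
```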
12.06.2025 13:51
Why learning C before Rust is not necessary StefanSalewski: > That is really sad. It should take no more than 5 minutes. Both concepts are quite simple, and for using Rust you really need only a very basic understanding. It may seem simple to us, but if you don't have any insight into how a CPU works and haven't already studied a language where you must manage memory a little (like C), perhaps it's not so simple without a good introductory book. You must understand that there's a CPU, which executes its instructions from memory and has a limited number of registers; that one register points to the instruction to execute; that because of those limitations, it must use a stack to save the registers when it calls a subroutine and to store the local variables that don't fit in the available registers; and that data can also be stored on the heap. Depending on the type and the language, some data can or cannot be on the stack, or only partially, so sometimes it's only the pointer that's on the stack, etc. There are great books that explain all of this, but it takes a little while to wade through all those notions, even just to understand what a stack is. On the plus side, it'll be useful for better understanding what's happening when a problem occurs, even in languages that don't expose everything.
12.06.2025 13:36
Why learning C before Rust is not necessary Qoori: > Just understanding heap and stack took more than a week to get it right in my brain. That is really sad. It should take no more than 5 minutes. Both concepts are quite simple, and for using Rust you really need only a very basic understanding. Unfortunately, some Rust books discuss the stack/heap stuff in much more detail than necessary, and some books make incorrect statements, e.g. that integers or arrays are always stored on the stack. This is obviously wrong: integers or arrays can, for example, be the element type of a vector and so live on the heap, since the vector's data buffer is always located on the heap. Such errors can arise easily in a discussion -- I hope I used some more care in my own book. For understanding the stack, some books use the picture of a stack of plates in a diner or restaurant; that makes it quite obvious that one can only add or remove elements from the top. And the heap is just a large memory region from which we can request storage blocks from the operating system and return them when no longer needed -- in C explicitly with malloc() and free(), in Rust a bit more hidden. For owned strings and vectors, there is a combination of both: the vector itself is a struct with 3 fields -- capacity, length, and a pointer to the actual data on the heap.
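A small illustration of that three-field picture (output shown for a typical 64-bit target):

```rust
fn main() {
    // The `Vec` struct itself (pointer, capacity, length) lives on the stack;
    // the element data lives in a heap buffer.
    let v: Vec<i32> = Vec::with_capacity(8);

    println!("len = {}, capacity = {}", v.len(), v.capacity()); // 0, 8
    println!("heap buffer at {:p}", v.as_ptr());
    println!(
        "Vec struct is {} bytes on the stack", // 24 bytes on a 64-bit target
        std::mem::size_of::<Vec<i32>>()
    );
}
```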
12.06.2025 13:17
Transmuting `Box`es of `#[repr(transparent)]` slice-like DSTs Thanks. Your implementation of `wrap` is definitely much more straightforward. As for `TransparentWrapper`, `bytemuck` provides a derive macro for safely implementing it, so it's pretty convenient.
12.06.2025 12:44
Macro_rules! for generating const-generic parameters You could start from here:

```rust
macro_rules! comb {
    ( $( $($a:literal,)* $b:ty $(,$c:ty)* );+ ) => {
        comb!( $( $($a,)* false $(,$c)* );+ ; $( $($a,)* true $(,$c)* );+ );
    };
    ( $( $($a:literal),+ );+ ) => {
        [$( Self::route_beam_impl::<$($a),+>(), )+]
    };
}

const FN: [RouteFn; 8] = comb!(bool, bool, bool);
```

The `bool` type can be replaced by any other type; it's just there to differentiate the literals `true`/`false` from what hasn't been replaced yet. It will expand to:

```rust
const FN: [RouteFn; 8] = [
    Self::route_beam_impl::<false, false, false>(),
    Self::route_beam_impl::<true, false, false>(),
    Self::route_beam_impl::<false, true, false>(),
    Self::route_beam_impl::<true, true, false>(),
    Self::route_beam_impl::<false, false, true>(),
    Self::route_beam_impl::<true, false, true>(),
    Self::route_beam_impl::<false, true, true>(),
    Self::route_beam_impl::<true, true, true>(),
];
```

You can replace that with something else by changing the last pattern in the macro, but it must produce valid output in the context it's used in, which is why I had to use the turbofish syntax. The macro works on lists of comma-separated mixes of `bool` and `true`/`false`, the lists being separated by semicolons. When only `true`/`false` remain, it falls through to the second pattern to create the array. It's a little tricky, to be honest, and it won't make your code very easy to read.

PS: There's a limit to how deep a macro can recurse, but the default limit is 128, so you should be safe.
12.06.2025 12:42
Transmuting `Box`es of `#[repr(transparent)]` slice-like DSTs vdrn: > Assuming that `bytemuck::TransparentWrapper<Self>` is a `#[repr(transparent)]` wrapper of `Self`, is this function sound? `TransparentWrapper` is a trait, so the question itself doesn't make much sense. You probably want to be asking "assuming that `W` is a valid `TransparentWrapper<Self>`, is it sound?" The answer is: it depends. `TransparentWrapper` is an inherently unsafe trait and is more of a "marker" than anything else, to be used in place of a generic `#[repr(transparent)]`. Assuming the `W` type does indeed match `Self` layout-wise: yes, it is going to be sound. You could even create your own ad-hoc version: it would have the exact same purpose and all the same safety requirements, with all the same ways to shoot yourself in the foot.
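For illustration, such an ad-hoc stand-in might look like the sketch below (hypothetical trait and types, not the code hidden behind the original post's collapsed details):

```rust
/// A hypothetical stand-in for `bytemuck::TransparentWrapper`.
///
/// # Safety
/// `Self` must be a `#[repr(transparent)]` wrapper around `Inner` with no extra
/// validity or safety invariants, so that `&Inner` and `&Self` are interchangeable.
pub unsafe trait AdHocTransparent<Inner: ?Sized> {
    fn wrap_ref(inner: &Inner) -> &Self;
}

#[repr(transparent)]
pub struct Meters(f64);

// SAFETY: `Meters` is `#[repr(transparent)]` over `f64` and adds no invariants.
unsafe impl AdHocTransparent<f64> for Meters {
    fn wrap_ref(inner: &f64) -> &Meters {
        // SAFETY: layout compatibility is guaranteed by `#[repr(transparent)]`.
        unsafe { &*(inner as *const f64 as *const Meters) }
    }
}

fn main() {
    let raw = 3.5_f64;
    let m: &Meters = Meters::wrap_ref(&raw);
    // Same address, different type: that's all the wrapper gives you.
    assert!(std::ptr::eq(&m.0, &raw));
}
```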
12.06.2025 12:31
Code patterns for working with async in cross-platform wasm & native context Thanks, that looks super interesting.
12.06.2025 12:01
ABI of a ZST in function argument position? I don't think so. See the Rust Internals thread "Creating 1-ZSTs guaranteed to have same extern "C" ABI as ()" (language design / Unsafe Code Guidelines, 30 Aug 2023):

> Unless the target/OS defines what it means to pass a zero-sized function argument — and not doing so would not be a surprising oversight due to not existing in C/C++ and functionally useless at a machine level — Rust also defines what the behavior...
12.06.2025 11:46
ABI of a ZST in function argument position? Is it guaranteed that a ZST argument has no effect on a function's ABI? For example, are the following two functions guaranteed to have the same ABI?

```rust
extern "C" {
    fn foo(a: u8, b: u16);
    fn bar(a: u8, x: (), b: u16);
}
```
12.06.2025 11:40
Can the compiler reorder code around my log with timestamps? Hi, I have a long-ish function that sometimes takes much longer than it should to execute. I've built myself a small logging type (see below) so I can narrow down where the problem lies. Essentially, I can call `timed_log.log("now doing this and that");` throughout the function, and that logs the message to a cache along with the timestamp at which it was called; I can later print the log from that cache if the overall execution time was too long (thereby avoiding the unnecessary I/O load of saving the log when performance is good).

However, I do fear that the log might not be accurate, as I'm not sure whether the compiler is allowed to rearrange code around the calls to the logger. I've found this thread on StackOverflow where someone has that exact problem when benchmarking their code, and there's no definitive answer 2 years later. I'd rather not have to dig into the assembly trying to understand what the compiler did, since this is a multi-crate scenario, and I'm also cross-compiling from amd64 to aarch64 and unfamiliar with the tooling.

Is there some sort of marker I can insert that tells rustc not to reorder instructions across it? Or is my fear unfounded, because there's some reason rustc doesn't actually reorder operations across my `Instant::elapsed` call? (If this decreases performance a bit, I can live with that.) I have found `std::sync::atomic::fence()`, but I cannot tell from the docs whether this is what I'm looking for.

* * *

```rust
use std::{fmt::{Display, Write}, time::{Duration, Instant}};

pub struct TimedLog<T> {
    t_start: Instant,
    log_entries: Vec<(Duration, T)>,
}

impl<T> TimedLog<T> {
    pub fn new() -> Self {
        Self { t_start: Instant::now(), log_entries: vec![] }
    }

    pub fn log(&mut self, entry: impl Into<T>) {
        self.log_entries.push((self.t_start.elapsed(), entry.into()));
    }

    pub fn clear_entries(&mut self) {
        self.log_entries.clear();
    }

    pub fn reset(&mut self) {
        self.log_entries.clear();
        self.t_start = Instant::now();
    }
}

impl<T> TimedLog<T>
where
    T: Display,
{
    pub fn write<W>(&self, writer: &mut W)
    where
        W: Write,
    {
        let mut t_old = None;
        for (t, msg) in self.log_entries.iter() {
            let dt = match t_old {
                None => Duration::new(0, 0),
                Some(t_old) => *t - t_old,
            };
            writeln!(
                writer,
                "t = {:11.6} ms | dt = {:11.6} ms | {}",
                t.as_secs_f32() * 1000.,
                dt.as_secs_f32() * 1000.,
                msg
            );
            t_old = Some(*t);
        }
    }
}
```
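For reference, the standard library distinguishes `std::sync::atomic::fence` (which also emits CPU barriers) from `std::sync::atomic::compiler_fence` (compiler-only, and only about memory operations). The sketch below shows what such markers look like around a measurement; it is purely an illustration, not a guarantee that they answer the reordering question in this thread:

```rust
use std::sync::atomic::{compiler_fence, Ordering};
use std::time::Instant;

fn main() {
    let work = || (0..1_000_000u64).sum::<u64>();

    // `compiler_fence` only restricts how the *compiler* may reorder memory
    // accesses across this point; it emits no CPU barrier and says nothing
    // about non-memory computations.
    compiler_fence(Ordering::SeqCst);
    let t0 = Instant::now();
    compiler_fence(Ordering::SeqCst);

    // `black_box` is a best-effort hint that keeps the computation from being
    // folded away or hoisted; it is commonly used in benchmarks.
    let result = std::hint::black_box(work());

    compiler_fence(Ordering::SeqCst);
    let elapsed = t0.elapsed();
    compiler_fence(Ordering::SeqCst);

    println!("sum = {result}, took {elapsed:?}");
}
```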
12.06.2025 11:40
Why learning C before Rust is not necessary Answering the one who said that "it depends on what people study for" and "retired people study for fun": well, actually I'm one of those people who learned low-level languages. When I was about 12 years old I made a website in plain HTML, without CSS; it was just UI. Since then I've gotten to know languages like CSS and Python, but I never finished any of that before I went to college. Then I got curious about Rust; the concepts are really raw. I'm reading from The Book, and I don't like practicing without material. Just understanding heap and stack took more than a week to get right in my brain. I'm still at it, and I fill personal blog articles with my Rust learning process; this is fun. I guess, from my experience, learning C is just a choice, but for uni it seems to be part of the curriculum.
12.06.2025 11:27
`Pin<Rc<T>>` vs `Rc<Pin<T>>` Context: I've got a data structure (call it `Thing`) which is always passed around in an `Rc`, and `Thing` currently has an `id` field, because two `Thing`s with the same data are not necessarily the same thing (and may become different in the future because of interior mutability). So my implementation of `PartialEq` for `Thing` compares their `id`s; `Hash` also just uses the id.

Generating and storing ids seems unnecessary though, seeing as the `Rc` provides an id for us in the form of its backing pointer. I was, however, a bit hesitant to use `Rc::as_ptr`/`ptr_eq` for these purposes, because I wasn't certain that the pointer would never change. Posts in "Does comment on Rc<T>.as_ptr() imply Rc<T> is always pinned?" suggest that it'd be totally fine so long as I don't use `make_mut`; I don't use `make_mut`, but I'd like to make certain that I don't shoot myself in the foot later on.

The actual question: If I want to guarantee that the pointer I obtain from an `Rc` will always point to the same data for the lifetime of that `Rc`, should I be using `Pin<Rc<Thing>>` or `Rc<Pin<Thing>>`? To me, the former implies that the address of the `Rc` won't change, which isn't helpful because I'll be cloning `Rc`s anyway, whilst the latter implies that the location of the `Thing`, which will be the backing pointer of the `Rc`, won't change.

Moreover, it seems to me that doing anything useful with a `Pin<Rc<T>>` is quite difficult, because I don't think you can just access a `&Rc<T>` from a `Pin<Rc<T>>`, which you need for doing anything useful with an `Rc`. However, I do suspect that I don't understand `Pin` very well, having only read its documentation for the first time today. Thanks in advance for any clarity you can shed on the matter.
12.06.2025 11:21
Why does it require double casting When you use the regular borrow operator, `&mut errhp_ctl`, you get a _reference_ to `errhp_ctl`, which needs to be coerced to a raw pointer with the `as` operator first, and then be cast to the target pointer type. When you use the _raw_ borrow operator, `&raw mut errhp_ctl`, you get a _raw pointer_ to `errhp_ctl` directly, so you only need to cast it once.
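A small sketch of the difference, with a hypothetical `ErrHandle` type standing in for whatever `errhp_ctl` actually is (the `&raw` form needs Rust 1.82 or newer):

```rust
use std::ffi::c_void;

// Hypothetical FFI handle type, only for illustration.
struct ErrHandle;

fn main() {
    let mut errhp_ctl: *mut ErrHandle = std::ptr::null_mut();

    // Through a reference: first cast the reference to a raw pointer,
    // then cast the pointee type -- two casts.
    let a = &mut errhp_ctl as *mut *mut ErrHandle as *mut *mut c_void;

    // Through the raw borrow operator: already a raw pointer, so one cast is enough.
    let b = &raw mut errhp_ctl as *mut *mut c_void;

    assert_eq!(a, b);
}
```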
12.06.2025 11:16
Panic handler in no_std for Aurix µC The `PanicInfo` argument the panic handler receives can be used to format the panic message. You could format the panic message and then call a function you define in C to tell it what caused the panic. If you want control flow to return from `rust_fn`, however, you will have to implement full unwinding. That and resetting the entire microcontroller are the only non-UB ways to escape from the panic handler. Also note that panicking should only be used for programming bugs; it should not be used for regular error handling the way you would use return codes in C or `Result` in Rust.
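A rough sketch of that approach, assuming a hypothetical C-side function `report_panic_to_c` and a small fixed-size buffer; this is not code from the thread, just an illustration:

```rust
#![no_std]

use core::fmt::Write;
use core::panic::PanicInfo;

extern "C" {
    // Hypothetical function implemented on the C side.
    fn report_panic_to_c(msg: *const u8, len: usize);
}

struct Buf {
    data: [u8; 128],
    len: usize,
}

impl Write for Buf {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        // Truncate silently if the message doesn't fit.
        let bytes = s.as_bytes();
        let n = bytes.len().min(self.data.len() - self.len);
        self.data[self.len..self.len + n].copy_from_slice(&bytes[..n]);
        self.len += n;
        Ok(())
    }
}

#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
    let mut buf = Buf { data: [0; 128], len: 0 };
    // Format the panic message (location + payload) into the fixed buffer.
    let _ = write!(buf, "{}", info);
    // Hand the formatted message to the C side before halting or resetting.
    unsafe { report_panic_to_c(buf.data.as_ptr(), buf.len) };
    loop {}
}
```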
12.06.2025 10:55
Macro_rules! for generating const-generic parameters Hello, I am currently working on a performance-critical part of a personal project (a long-range route plotter for Elite: Dangerous, using parallel beam-search across an implicit graph) and I want to do some compile-time branch elimination (at the cost of code size). The signature for my worker function looks like this:

```rust
pub(crate) fn route_beam_impl<
    const REFUEL: bool,
    const SHIP: bool,
    const LIMIT: bool,
>(&self, args: ArgTuple) -> _
```

and it contains a few `if`s that disable parts of the code based on the const-generic parameters. My idea is to generate an array of function pointers for every combination of parameters using a macro, and then at runtime compute an index into the array and call the corresponding function (that call is outside the hot path, so the indirection doesn't matter), but I'm struggling with the implementation since I have almost no experience writing macros.

My idea was to have a macro invocation like `const FN: [RouteFn; 8] = generate_static_dispatch_array(Self::route_beam_impl; REFUEL, SHIP, LIMIT)`, which would then generate something like

```rust
const FN: [RouteFn; 8] = [
    Self::route_beam_impl<false, false, false>,
    Self::route_beam_impl<false, false, true>,
    Self::route_beam_impl<false, true, false>,
    ...
];
```

But I'm not sure how to actually implement my idea. Best regards, Daniel
12.06.2025 10:33
Transmuting `Box`es of `#[repr(transparent)]` slice-like DSTs Assuming that `bytemuck::TransparentWrapper<Self>` is a `#[repr(transparent)]` wrapper of `Self`, is this function sound?

```rust
use std::mem;

#[repr(C)]
pub struct DstArray<H, D> {
    pub header: H,
    pub data: [D],
}

impl<H, D> DstArray<H, D> {
    pub fn wrap<W>(self: Box<Self>) -> Box<W>
    where
        W: bytemuck::TransparentWrapper<Self> + ?Sized,
    {
        let self_box = mem::MaybeUninit::new(self);

        // - `W` is a transparent wrapper of `Self`
        // - Ptr metadata of `&Self` is a length, so the metadata of `&W` must be the same length.
        //
        // Does this imply that we can transmute
        // `MaybeUninit<Box<Self>>` to `MaybeUninit<Box<W>>`?
        let w_box: mem::MaybeUninit<Box<W>> = unsafe { mem::transmute_copy(&self_box) };
        unsafe { w_box.assume_init() }
    }
}
```
12.06.2025 09:25
Panic handler in no_std for Aurix µC Yes, there are ways to handle the error in Rust, like `get()`, or doing some checks to prevent it. But what I need is: if it does go into a panic, what are the ways to know the cause on an embedded target, when it stays in `loop {}`?
12.06.2025 09:22
Why does it require double casting OK, that must be it. Thanks. Since I don't have a way to check it right now: basically, having `&raw` makes it possible to get rid of the first cast/coercion, i.e. the `as *mut *mut _`. Is that correct?
12.06.2025 09:12
Why doesn't rust warn about this missing match / typo? You have a typo in the third match arm: `SCROLL_EXIT` instead of `SCROLL_OF_EXIT`. Because `SCROLL_EXIT` is not an existing symbol, Rust assumes you are declaring it as a variable that can be used within the match arm. Such a binding matches any value, so the match is exhaustive.
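A small illustration of the pitfall; the constants here are hypothetical stand-ins, not the OP's actual code:

```rust
// Hypothetical constants, only for illustration.
const SCROLL_OF_EXIT: u32 = 7;
const POTION: u32 = 1;

fn describe(item: u32) -> &'static str {
    match item {
        POTION => "potion",
        // Typo: `SCROLL_EXIT` is not a known constant, so this arm is a *binding*
        // that captures every remaining value. The match becomes exhaustive, so
        // there is no missing-arm error (though `unused_variables` and
        // `non_snake_case` lints will usually hint at the mistake).
        SCROLL_EXIT => "scroll of exit",
    }
}

fn main() {
    // With the typo, everything that isn't POTION falls into the binding arm:
    assert_eq!(describe(SCROLL_OF_EXIT), "scroll of exit");
    assert_eq!(describe(42), "scroll of exit"); // silently wrong
}
```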
12.06.2025 09:06
Panic handler in no_std for Aurix µC If you want to handle the error, you should not use the array/slice index operator, but instead use fallible APIs, e.g. `slice.get()`:

```rust
// the array of data
let data = [0u8; 32];

// out-of-bound indexing will panic:
let x = data[100];

// `get()` will return `None` for out-of-bound access:
if let Some(y) = data.get(100) {
    // use `y` here
}
```
12.06.2025 09:03
Panic handler in no_std for Aurix µC Hello, I am new to Rust for embedded targets, and I was trying out ways it can panic and return the information back to the file the function is called from (in my case a .c file). What I am trying to achieve: if you try to access an element of an array that doesn't exist, Rust will usually panic and give an error. But if the index is passed at runtime, Rust will compile the code, then panic and stay there forever once you flash the software. Is there a way to know why it panicked, or to get some error information back to C and then maybe reset the target?

Below is the Rust code that I am trying to work with that panics:

```rust
#[no_mangle]
pub extern "C" fn rust_fn(idx: *const u8) -> u8 {
    let mut data: [u8; 32] = [0; 32];
    data[0] = 42; // hex val - 2A
    data[7] = 26; // hex val - 1A

    if idx.is_null() {
        return E_PARAM;
    }
    let idx_deref: usize = unsafe { *idx as usize };

    unsafe {
        core::ptr::write_volatile(&mut data[idx_deref], 64); // hex val - 40
    }
    return E_OK;
}

#[no_mangle]
pub extern "C" fn get_runtime_idx() -> usize {
    let mut n: usize = 0;
    for i in 0..5 {
        n += i;
    }
    return n + 50; // value of n is 10
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    LAST_ERROR.store(E_PANIC_OOB, Ordering::Relaxed);
    loop {} // infinite loop to halt execution on panic
}
```

Below is the C code where these two functions are called:

```c
uint8 idx = get_runtime_idx();
volatile int ret_code = rust_fn(&idx);
```

Since the code goes into a panic and never returns, I cannot read `ret_code` or any error to know the cause. Any ideas or help on the topic will be really helpful and is greatly appreciated. Thanks
12.06.2025 08:50
Why does it require double casting somedude54: > raw is not found in this scope What's your compiler version? The `&raw` syntax was stabilized back in 1.82.
12.06.2025 08:11
Why does it require double casting nerditation: > `&raw mut errhp_ctl as *mut *mut c_void` Hey, I've tried to check this, but I'm getting an error that `raw` is not found in this scope. Any idea as to why? Should it just be `&mut errhp_ctl as *mut *mut c_void`?
12.06.2025 08:03
Simple async HTTP server (for OAuth2) I'm looking for a very simple async HTTP server. Something like `tiny_http`, but async.

Context: I'm using OAuth2. That means I have to spawn an HTTP server, which should only wait for the very first connection, read out the URL query parameters, respond with "Authorization successful", and then close the server. Right now I do it with `tiny_http`:

```rust
fn catch_redirect() -> anyhow::Result<RedirectArgs> {
    // listen for request
    let server = Server::http(SOCKET_ADDRESS).map_err(anyhow::Error::from_boxed)?;
    let request = server.recv()?;

    // extract params
    let url = format!("http://localhost{}", request.url());
    let args = extract_args(url);

    // respond
    let response = match &args {
        Ok(_) => Response::from_string(
            "Authorization successful! You can now close this tab and return to the app.",
        ),
        Err(err) => Response::from_string(format!("Error: {}", err)),
    };
    request.respond(response)?;

    args
}
```

As you can see, it really only responds to the first request and then closes the server (in this case by dropping it). Now I want to build it pretty much exactly like this, but in async. That means the only difference really should be that `server.recv()` is replaced by something like `server.recv().await`. I can't just use `tokio::task::spawn_blocking(...)`, because I need to be able to _abort_ the task when the user cancels the OAuth2 process. That's the reason I need async.

But unfortunately, I can't find a good crate for that. The go-to async HTTP server seems to be `hyper`, but it seems to require this `Service` architecture. A `Service` responds to _all_ incoming requests. That would make it unnecessarily complicated to realize my use case: I would need something like a oneshot channel and then abort the server "externally". Instead I really just want something like `server.receive().await`, so that it doesn't even care about the second request. Any ideas? Thanks for your help!
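Not an existing crate's API, but one way to get the "accept exactly one connection" shape is plain `tokio` (assuming the `net` and `io-util` features) with a hand-rolled HTTP response; `anyhow` is reused from the snippet above, and the address is a placeholder:

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

async fn catch_redirect_async() -> anyhow::Result<String> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;

    // Wait for the single redirect request; this await point is where the task
    // can be aborted if the user cancels the OAuth2 flow.
    let (mut stream, _) = listener.accept().await?;

    // Read just the request head; enough to recover the request line with the query string.
    let mut buf = vec![0u8; 4096];
    let n = stream.read(&mut buf).await?;
    let head = String::from_utf8_lossy(&buf[..n]).into_owned();
    let request_line = head.lines().next().unwrap_or_default().to_owned();

    // Respond and drop the listener, closing the "server".
    let body = "Authorization successful! You can now close this tab.";
    let response = format!(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    );
    stream.write_all(response.as_bytes()).await?;

    Ok(request_line) // e.g. "GET /?code=...&state=... HTTP/1.1"
}
```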
12.06.2025 07:59
Bon with Newtype Pattern Bon is impressively flexible, I'm a big fan. I bet you'd even be able to get `default_directives` to accept `[&str]`, so you could write:

```rust
let c = config::Config::builder()
    .default_global_level("INFO")
    .default_directives(["a=ERROR"])
    .build();
```

See bon-rs.com ("with | Bon: Next-gen compile-time-checked builder generator, named function's arguments, and more!").
12.06.2025 07:54