Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)

  • 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • Wayland and GPU stuff should be very good on EndeavourOS, better than most systems I have seen, and certainly better than openSUSE Leap and Mint. I don’t know about Fedora, however.

    EndeavourOS has its own base repo, but also the regular Arch stuff like the AUR. The AUR is probably the best source for all those programs that are usually missing from your repo, and since the base system is stable on EndeavourOS there is no problem if some random program occasionally needs a special version or a manual install; it won’t affect anything else.
    The AUR is not the main package source for EndeavourOS.
    I don’t know your hardware, but the combination of up-to-date system components, EndeavourOS’s focus on just working, and all the shit in the AUR (to my understanding Flatpak is currently quite useless for drivers) sounds like it should accept any hardware at least as well as other Linux distros.

    On a side note about Flatpaks: there is a long-running conflict between stability, portability, and security. The old-school package systems are designed to allow updating libraries system-wide, switching in ABI-compatible replacements containing fixes. On the other hand, you have AppImage, Flatpak, and the like, which bring their own everything and will therefore keep running on old, unsafe libraries, sometimes for years, until the developers of each of those projects update their bundled copies of all those libraries.
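
    To make that tradeoff a bit more concrete, here is a small Python sketch that counts how many private copies of one shared library live inside per-app bundles versus the single system copy the package manager keeps patched. The soname and the search paths are illustrative assumptions for a typical Linux layout, not a statement about any particular distro or app.

```python
# Rough illustration of the bundling tradeoff: count private copies of one
# shared library inside per-app bundles (Flatpak/AppImage style) versus the
# system copy that a single package upgrade keeps patched.
# SONAME, SYSTEM_DIRS, and BUNDLE_ROOTS are example values, not authoritative.
from pathlib import Path

SONAME = "libcrypto.so"  # example library; pick whatever you care about
SYSTEM_DIRS = [Path("/usr/lib"), Path("/usr/lib64")]
BUNDLE_ROOTS = [Path("/var/lib/flatpak"), Path.home() / "Applications"]

def find_copies(roots):
    """Return all files matching the soname under the given roots."""
    hits = []
    for root in roots:
        if root.exists():
            hits.extend(p for p in root.rglob(f"{SONAME}*") if p.is_file())
    return hits

system_copies = find_copies(SYSTEM_DIRS)
bundled_copies = find_copies(BUNDLE_ROOTS)

# One package upgrade swaps the system copies for every dynamically linked
# program; each bundled copy only changes when its own app or runtime updates.
print(f"system copies of {SONAME}: {len(system_copies)}")
print(f"bundled copies of {SONAME}: {len(bundled_copies)}")
for p in bundled_copies:
    print("  bundled:", p)
```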










  • They were doing the same on other repos for months.
    Both their npm module and their Android client.
    On Android they tried to get people to add their own F-Droid repo, because the official F-Droid repo has not had updates for 3 months due to the license changes.

    Edit: Looking at it now compared to 4 days ago, they apparently got F-Droid to remove Bitwarden from the repo entirely. To me this looks like they are sweeping it under the rug, hiding the change and pretending it has always been on their own repo, which they control.

    Next time they try this, the mobile app won’t run into issues: the exact issues that this time raised awareness and caused the outcry over the desktop app, which is similarly present in repos with license requirements.

    If they were giving up on their plan, wouldn’t they “fix” the Android license issue and resume updating on F-Droid, instead of burning all bridges, dropping it from the repo entirely, and still pushing their own custom repo? And where is the npm license revert?



  • It means previous versions remain open, but ownership trumps any license restrictions.
    They don’t license the code to themselves; they just own it. And if they want to close-source it, they can.

    GPLv3 and copyleft only protect against non-owners doing that. A CLA means a project is not strongly open source; the company behind the CLA can rug-pull at any time.

    The fact that a project even has a CLA should be extremely suspect, because this is exactly what you would use one for: to ensure you can harvest contributions and that none of those contributors will stand in your way when you later burn the bridges and enshittify.



  • which also references an effort to use the media to quietly disseminate Google’s point of view about unionized tech workplaces.

    Bogas’ order references an effort by Google executives, including corporate counsel Christina Latta, to “find a ‘respected voice’ to publish an op-ed outlining what a unionized tech workplace would look like,” and urging employees of Facebook, Microsoft, Amazon, and Google not to unionize.

    in an internal message Google human resources director Kara Silverstein told Latta that she liked the idea, “but that it should be done so that there ‘would be no fingerprints and not Google specific.’”

    From the article posted by 100_kg_90_de_belin.

    Google seemingly does care about their internal image, so they will only make their actions obvious when they fire you for bogus reasons after you express interest in joining a union.
    Quite nasty, in that they give you no hints about how extreme their efforts on this are. They monitor internal employee tools like they are cosplaying the NSA, but you wouldn’t know until you are fired out of the blue.



  • Yeah, for amateurs it’ll be a while longer before this tech becomes easily available.
    Though it is also fundamentally fixable: you can take the output of your sensor and apply the same sort of logic to it as the professional large telescopes do. The blocked spots will be larger, since the telescope will not correct for atmospheric distortions and will likely be in a less favorable location, but you can still do far better than throwing out entire frames or even entire exposures.
    It is of course a much larger ask for hobby astronomers to deal with this initial wild-west software mess of figuring all of that out.

    As for the RF mess, this is the first time I have heard of that. It honestly seems kind of odd to me: we have a lot of frequency-control regulations globally, and I have heard of SpaceX going through the usual frequency-allocation proceedings. A violation of those would be easy to show and should get them in serious trouble quickly. Do you have any source on that?


  • Maybe to add a bit of general context to this: I am not an astronomer, but I work in an adjacent field, so I hear a lot of astronomers talk about their work, both in private and in public.
    You don’t really hear them talk about satellites often. From what I gather, what really wrecks astronomy is light pollution, which has been doubling every few years for a while now and is basically confining optical astronomy to a select few areas.

    The worst thing for astronomy in the last century has probably, ironically, been the invention of the LED.

    The satellite streak thing is probably a minor point, where newspapers caught some justified ranting of astronomers and blew it way out of proportion.


  • Wrecking is not really the right term.
    It is causing work for astronomers, and wrecking very few older systems, but generally it is an issue you can work around, i.e. something temporary. What you usually see, in my experience of the field, is that some of your work gets degraded by satellite streaks, which are about 2x more common since Starlink, and you understandably complain about Starlink. Then you get around to coding up a solution to deal with the streaks, spend another few runs until it more or less works, and eventually forget this was ever a thing.

    In more detail, the base issue is that you are taking an image with probably minutes, hours, or days of exposure, and every satellite passing through that image is going to create a streak that does not represent a star. Naturally, that is not good in most cases.
    The classic approach here, because this issue has existed since before Starlink, is, depending on frequency, exposure length, and your methodology, to either retake the entire shot or manually throw out at least the frames with the satellite in them.

    The updated approach is to use info about satellite positions to automatically block out the very small angle of sky around them that their light can be scattered into by the atmosphere, and to remove this before summing that frame into your final exposure. Depending on methodology, it might also be feasible to automatically throw away frames with any satellite in them, or you can count up which parts of the image were blocked for how long in total and append a tiny bit of exposure to just those parts at the end.
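
    To make that concrete, here is a minimal numpy sketch of the masked-stacking idea: drop flagged pixels from each sub-frame before summing, keep a per-pixel count of how much clean exposure each pixel actually received, and note which pixels were blocked so often that they could be given a little extra exposure at the end. The function name, array shapes, and toy data are illustrative assumptions, not any observatory’s real pipeline.

```python
# Minimal sketch of masked frame stacking with per-pixel exposure accounting.
# All names, shapes, and thresholds below are assumptions for illustration.
import numpy as np

def stack_with_satellite_masks(frames, masks, frame_exposure_s):
    """Sum sub-frames into one exposure while ignoring pixels flagged as
    satellite streaks, and track per-pixel effective exposure time.

    frames: (n_frames, H, W) float array of calibrated sub-frames
    masks:  (n_frames, H, W) bool array, True where a predicted satellite
            position (plus a small scattering margin) falls in that frame
    frame_exposure_s: exposure time of each sub-frame in seconds
    """
    frames = np.asarray(frames, dtype=float)
    masks = np.asarray(masks, dtype=bool)

    good = ~masks                                    # usable pixels per frame
    summed = np.where(good, frames, 0.0).sum(axis=0)
    exposure = good.sum(axis=0) * frame_exposure_s   # clean seconds per pixel

    # Normalize to a mean rate so partially blocked pixels stay comparable;
    # pixels blocked in every frame come out NaN and would need extra exposure
    # appended later (or, alternatively, whole streaked frames can be dropped).
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = summed / exposure
    return rate, exposure


# Toy usage: 10 sub-frames of 30 s each with random streak masks.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(10, 64, 64))
masks = rng.random(size=(10, 64, 64)) < 0.02
rate, exposure = stack_with_satellite_masks(frames, masks, frame_exposure_s=30.0)
under_exposed = exposure < 0.9 * exposure.max()      # where to append extra exposure
```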

    To complicate this, I think the more recent complaints are not about the permanent constellation satellites but about the freshly deployed ones that are still raising their orbits, simply because their positions are not as easy to determine while their orbits are changing. So you need to further adapt your system to specifically detect these chains of satellites and block them out of your exposures as well.

    The issue here is that you need to build this system that deals with satellite data. And then you need that level of control over the frames in your exposure, which naturally does not match how exposure used to work in the olden days of film, but to my knowledge does work on all “modern” telescopes.
    My knowledge here is limited, but I think this covers roughly the last 30-40 years of optical telescopes, which should be largely all ground-based optical telescopes relevant today. Further, you probably do need to replace electronics in older telescopes, since they were not built to allow this selective blocking, only to interrupt the exposure.

    In summary, not affected are modern narrow-FOV optical telescopes, and in general telescopes operating far from visual frequencies.
    Affected, with some extra work, would be some older narrow (but not very narrow) FOV telescopes, as you now have to make them dodge satellites or pause briefly, where previously you could have just thrown away the entire exposure in the rarer cases where you caught a satellite. This would be software-only (not that software is free).
    Modern wide-FOV telescopes might need hardware upgrades, or just software upgrades, to recover frames with streaks in them.
    Old wide-FOV telescopes may be taken out of commission, or at least have their effective observation time cut shorter by needing to give up more and more exposure time to satellites in the frame.

    It is a problem, yes, but in my understanding one that can be overcome, and one that causes its main annoyance and the majority of its issues while the number of satellites is increasing, not after it has stopped increasing.
    I don’t know of a single area of ground-based astronomy that couldn’t be done with even a million satellites in LEO.