Ofc, no problem.
Since this thread was initially about beginner friendly distros, I wanted to ensure I wasn’t going around recommending an inferior or problematic distro to new users as their first experience.
Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)
Wayland and GPU stuff should be very good on EndeavourOS, better than most systems I have seen, certainly better than openSUSE Leap and Mint. I don’t know Fedora, however.
EndeavourOS has its own base repo, but also the regular Arch stuff like the AUR. The AUR is probably the best source for all those programs that are usually missing from your repo, and since the base system on EndeavourOS is stable, there is no problem if some random program needs a special version or a manual install sometimes; it won’t affect anything else.
The AUR is not the main package source for EndeavourOS.
I don’t know your hardware, but the combination of up-to-date system components, EndeavourOS’s focus on just working, and all the shit in the AUR (to my understanding Flatpak is currently quite useless for drivers) sounds like it should just accept any hardware at least as well as other Linux distros do.
On a sidenote about Flatpaks: there is a long-running conflict between stability, portability, and security. The old-school package systems are designed to allow updating libraries system-wide, swapping in ABI-compatible replacements that contain fixes. On the other hand, you have AppImage, Flatpak, …, which bring their own everything and will therefore keep running on old, unsafe libraries, sometimes for years, until the developers of each of those specific projects update the libraries they bundle.
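As a toy illustration of that difference (every name and version number below is made up, this is not how any real package manager stores things): one distro-side library update fixes all dynamically linked apps at once, while each bundling app stays on its old copy until its own developers ship an update.

```python
# Toy model of shared vs. bundled libraries; all names and versions are invented.
system_libs = {"libexample": "3.0.7"}            # single copy, updated by the distro

dynamically_linked = ["browser", "mail-client"]  # always use the system copy
bundled = {                                      # ship their own private copy
    "chat-appimage": "1.1.1",
    "editor-flatpak": "3.0.2",
}

# One distro update patches every dynamically linked app in one go...
system_libs["libexample"] = "3.0.8"
for app in dynamically_linked:
    print(f"{app}: uses system libexample {system_libs['libexample']}")

# ...while each bundled copy stays old until that project's own developers update it.
for app, version in bundled.items():
    print(f"{app}: still bundles libexample {version}")
```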
I see. I have heard a lot of mad things about Manjaro.
In my experience EndeavourOS is great for less experienced users, and doesn’t really have anything to do with Manjaro.
I’d recommend you give it a try.
I think our mistake here was not being alcoholics
Ullr Nordic Libation Peppermint Cinnamon Schnapps Liqueur
Apparently a tool to transport serial connections over the internet, so that you can run programs which use them on a machine other than the one(s) the serial devices are plugged into.
What is your take on EndeavourOS?
The new page has a clear section for North Korea, and lists wars newer than the Ugandan Bush War for it.
To me it seems more like someone noticed the original page was severely behind and therefore decided to merge it into the Korea article, since apparently no one was maintaining it otherwise.
The offline version is on IzzyOnDroid.
The design is worse, yes.
I don’t think it matters much because most of the time you only see the autofill thing, not the app.
When you do go to the app, it is to select between multiple credentials, which is still a split second action.
On mobile I have my 2FA codes in a different, more convenient app (Aegis), though K2A does allow copying 2FA codes.
It doesn’t.
Both DX and K2A-O open a local KeePass file.
They are capable of reloading the file when it is changed, and can be set to immediately write out changes to the file.
Then you take whichever file sync tool you like and sync it with all other devices using it. As long as the sync tool can sync files in your internal storage, it will work.
I use syncthing, with a dedicated keepass folder containing only the database file. Then I simply add all my devices to the share and it’ll sync any changes to all other devices. I also have version history enabled for the share.
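As a rough sketch of what that reload-on-change behaviour amounts to on the desktop side (using the third-party pykeepass library; the database path and master password below are placeholders):

```python
import os
import time

from pykeepass import PyKeePass  # third-party: pip install pykeepass

DB_PATH = "/home/user/Sync/keepass/passwords.kdbx"  # placeholder: file inside the synced folder
PASSWORD = "placeholder-master-password"            # placeholder master password

kp = PyKeePass(DB_PATH, password=PASSWORD)
last_mtime = os.path.getmtime(DB_PATH)

while True:
    time.sleep(5)
    mtime = os.path.getmtime(DB_PATH)
    if mtime != last_mtime:  # the sync tool dropped in a newer version of the file
        last_mtime = mtime
        kp = PyKeePass(DB_PATH, password=PASSWORD)  # re-open to pick up remote changes
        print(f"Database reloaded, {len(kp.entries)} entries")
```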
Keepass2Android Offline also works very well. It has a somewhat different feature set compared to DX.
I found it to be more stable at remaining permanently unlocked, and DX dropped the third domain level for password matching on either websites or apps, I don’t remember which.
On the other hand DX works better for adding new credentials or making changes. Since I usually do that on desktop it doesn’t matter much for me.
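To illustrate what that domain-level matching is about, here is a toy sketch (not either app’s actual matching logic; a real matcher would use the Public Suffix List rather than assuming the registrable domain is always the last two labels):

```python
def registrable_part(host: str, levels: int = 2) -> str:
    # Toy version: keep only the last `levels` labels, e.g. accounts.example.com -> example.com.
    return ".".join(host.lower().rstrip(".").split(".")[-levels:])

stored_for = "example.com"
print(registrable_part("accounts.example.com") == stored_for)  # True: subdomain still matches
print(registrable_part("example.com.evil.net") == stored_for)  # False: different registrable domain
```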
They were doing the same on other repos for months.
Both their npm module and Android client.
On Android they tried to get people to add their own F-Droid repo, because the official F-Droid repo has not had updates for 3 months due to the license changes.
Edit: Looking at it now compared to 4 days ago, they apparently got F-Droid to remove Bitwarden entirely from the repo. To me this looks like they are sweeping it under the rug, hiding the change and pretending it has always been on their own repo, which they control.
Next time they try this, the mobile app won’t run into the exact issues that this time raised awareness and caused the outcry over the desktop app, which is similarly present in repos with license requirements.
If they were giving up on their plan, wouldn’t they “fix” the Android license issue and resume updating F-Droid, instead of burning all bridges and dropping it from the repo entirely while still pushing their own custom repo? Where is the npm license revert?
Also important to note is that they are creating the same license problems in other places.
They broke F-Droid builds 3 months ago and are now trying to steer users to their own repo. Their own repo ofc doesn’t apply FOSS requirements, because the Android app is no longer FOSS as of 3 months ago. Now the F-Droid version is slowly going out of date, which creates a nice security risk for no reason other than their greed.
Apparently they also closed-sourced their “convenient” npm Bitwarden module 2 months ago, using some hard-to-follow reference to a license file. Previously it was marked GPLv3.
It means previous versions remain open, but ownership trumps any license restrictions.
They don’t license the code to themselves, they just have it. And if they want to close source it they can.
GPLv3 and copyleft only work to protect against non-owners doing that. A CLA means a project is not strongly open source; the company behind the CLA can rugpull at any time.
The fact that a project even has a CLA should be extremely suspect, because this is exactly what you would use one for: to ensure you can harvest contributions and none of those contributors will stand in your way when you later burn the bridges and enshittify.
You could just copy others’ signed codes, so you would also need some sort of TOTP system.
Then you could still place some camera capturing and streaming the plates of parked cars in real time. So you’d either need two-way communication with the license plates, where the cameras tell them to show a code for some specific nonce (which you could potentially still stream, so you would also need strict latency checks), or you would have to get far more reliable GPS and make that part of the TOTP.
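For reference, a minimal sketch of the TOTP idea being referred to, where a copied code stops validating once the time step rolls over (the shared secret below is made up; this is just the standard HOTP/TOTP construction, nothing specific to license plates):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # Standard time-based one-time password: HMAC over the current time step (RFC 6238/4226).
    counter = int(time.time() // step)                      # changes every `step` seconds
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up shared secret; a copied code is only valid until the current 30-second window ends.
print(totp(b"per-plate-shared-secret"))
```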
which also references an effort to use the media to quietly disseminate Google’s point of view about unionized tech workplaces.
Bogas’ order references an effort by Google executives, including corporate counsel Christina Latta, to “find a ‘respected voice to publish an op-ed outlining what a unionized tech workplace would look like,” and urging employees of Facebook, Microsoft, Amazon, and Google not to unionize.
in an internal message Google human resources director Kara Silverstein told Latta that she liked the idea, “but that it should be done so that there ‘would be no fingerprints and not Google specific.’”
From the article posted by 100_kg_90_de_belin.
Google seemingly does care about their internal image, so they will only make their actions obvious when they fire you for bogus reasons after you try to join a union.
Quite nasty, in that they give you no hints about how extreme their efforts on this are. They monitor internal employee tools like they are cosplaying the NSA, but you wouldn’t know until you are fired out of the blue.
All orca are dolphins but not all dolphins are orca
Yeah, for amateurs it’ll be a while longer before this tech becomes easily available.
Though it is also fundamentally fixable: you can take the output of your sensor and apply the same sort of logic to it as the large professional telescopes do. The blocked spots will be larger, since the telescope will not correct for atmospheric distortions and will likely be in a less favorable location, but you can still do far better than throwing out entire frames or even entire exposures.
It is ofc a much, much larger ask for hobby astronomers to deal with this initial wild-west software mess of figuring all of that out.
As for the RF mess, this is the first time I have heard of that. It honestly seems kind of odd to me: we have a lot of frequency control regulations globally, and I have heard of SpaceX going through the usual frequency allocation proceedings. A violation of that would be easy to show and should get them in serious trouble quickly. Do you have any source on that?
Maybe to add a bit of general context to this, I am not an astronomer but I work in an adjacent field. So I hear a lot of astronomers talk about their work both in private and public.
You don’t really hear them talk about satellites often. What really wrecks astronomy, from what I gather, is light pollution, which has been doubling every few years for a while now and is basically confining optical astronomy to a select few areas.
The worst thing for astronomy in the last century has probably, ironically, been the invention of the LED.
The satellite streak thing is probably a minor point, where newspapers caught some justified ranting of astronomers and blew it way out of proportion.
Wrecking is not really the right term.
It is causing work for astronomers, and wrecking very few older systems, but generally it is an issue you can work around, i.e. something temporary. What you usually see, in my experience of the field, is that some of your work gets degraded by satellite streaks, which are about 2x more common since Starlink, and you understandably complain at Starlink. Then you get around to coding up a solution to deal with the streaks, spend another few runs until it roughly works, and eventually forget this was ever a thing.
In more detail, the base issue is that you are taking an image, with probably minutes or hours or days of exposure, and every satellite passing through that image is going to create a streak that does not represent a star. Naturally that is not good in most cases.
The classic approach here, because this issue has existed since before Starlink, is to either retake the entire shot or manually throw out at least the frames with the satellite on them, depending on frequency, exposure length, and your methodology.
The updated approach is to use info about satellite positions to automatically block out the very small patch of sky around them that their light can be scattered into by the atmosphere, and remove this before summing that frame into your final exposure. Depending on methodology, it might also be feasible to automatically throw away frames with any satellite on them, or you can count up which parts of the image were blocked for how long in total and append a tiny bit of exposure only to those parts at the end.
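A rough sketch of that kind of masked stacking (not any observatory’s actual pipeline; the frames, streak masks, and exposure time are placeholders, and the masks are assumed to have already been computed from predicted satellite positions):

```python
import numpy as np

def stack_masked_frames(frames, streak_masks, frame_exposure_s):
    """Sum sub-exposures while ignoring pixels flagged as satellite streaks.

    frames:           (N, H, W) array of individual sub-exposures
    streak_masks:     (N, H, W) boolean array, True where a streak (plus margin) is predicted
    frame_exposure_s: exposure time of each sub-exposure in seconds
    """
    frames = np.asarray(frames, dtype=float)
    valid = ~np.asarray(streak_masks, dtype=bool)

    summed = np.where(valid, frames, 0.0).sum(axis=0)
    # Per-pixel effective exposure time: blocked pixels simply collected less of it.
    exposure_map = valid.sum(axis=0) * frame_exposure_s

    # Normalize by the exposure each pixel actually received (zero where always blocked),
    # instead of throwing away whole frames or whole exposures.
    stacked = np.divide(summed, exposure_map,
                        out=np.zeros_like(summed), where=exposure_map > 0)
    return stacked, exposure_map
```

The exposure map is also what you would look at to see which parts of the image were blocked for how long, and therefore which parts need a little extra exposure appended at the end.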
To complicate this, I think more modern complaints are not about the permanent constellation satellites but about freshly deployed ones that are still raising their orbits, simply because their positions are not as easy to determine while their orbits are changing. So you need to further adapt your system to specifically detect these chains of satellites and also block them out of your exposures.
The issue here is that you need to create this system that deals with satellite data, and then you need that level of control over the frames in your exposure, which naturally does not match how exposure used to work in the olden days of film, but to my knowledge does work on all “modern” telescopes.
My knowledge here is limited, but I think this covers roughly the last 30-40 years of optical telescopes, which should largely be all ground-based optical telescopes relevant today. Further, you probably do need to replace electronics in older telescopes, since they were not built to allow this selective blocking, only to interrupt the exposure.
In summary: not affected are modern narrow-FoV optical telescopes and, in general, telescopes operating far from visual frequencies.
Affected, with some extra work, would be some older narrow (but not very narrow) FoV telescopes, as you now have to make them dodge satellites or pause briefly, where previously you could have just thrown away the entire exposure in the rarer cases where you caught a satellite. This would be software-only (not that software is free).
Modern wide-FoV telescopes might need hardware upgrades, or just software upgrades, to recover frames with streaks on them.
Old wide-FoV telescopes may be taken out of commission, or at least have their effective observation time cut shorter by having to give up more and more exposure time to satellites in the frame.
It is a problem, yes, but in my understanding one that can be overcome, and one that causes its main annoyance and the majority of its issues while the number of satellites is increasing, not after it has stopped increasing.
I don’t know of a single area of ground-based astronomy that couldn’t be done with even a million satellites in LEO.
What for? What did they do?