• 0 Posts
  • 181 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Arch normally immediately updates to the latest version of every program

    This is not true though. Arch packages new program versions as soon as they can - for popular stuff this happens quickly, but not everything updates fast. And when they do publish a new package it goes to the testing repo for a short time before being promoted to the stable repos. If they notice a problem with a package it is held back until it can be solved. There is not a huge amount of testing done here, as that is very time consuming and Arch does not have the manpower for it. But they also release very few broken packages at all. I have seen other distros like Ubuntu cause far more havoc with a broken update than Arch ever has.


  • Avoid clone() options

    I don’t really like that as general advice. A lot of the time a clone is a perfectly valid and fine thing to do. More often than not I will reach for a clone rather than an Rc or Arc. It’s fine, you don’t need to be afraid of it. And it misses the more important advice - avoid allocating in tight loops.

    There are lots of ways you can allocate data; cloning is only one of them, and not every clone allocates at all. So it is a poor thing to get hung up on. If you have an Rc or Arc then clones are cheap. Stack-only data is also cheap to clone (and is often Copy). Some structs internally use Arc or Rc, or are just simple wrappers around copyable types. And the advice misses other forms of allocation: creating Strings or Vecs, boxing data, etc. All of these things, including cloning, are fine most of the time, but should be avoided in tight loops and performance sensitive parts. And when you are learning, it quite often does not matter much to avoid them at all.
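    A minimal sketch of the distinction (the numbers and names are just illustrative):

    ```rust
    use std::fmt::Write;
    use std::rc::Rc;

    fn main() {
        // Cloning an Rc only bumps a reference count - no allocation, no deep copy.
        let shared = Rc::new(vec![1, 2, 3]);
        let cheap = Rc::clone(&shared);

        // Cloning a String allocates a new buffer and copies the bytes.
        let name = String::from("some text");
        let costly = name.clone();

        // In a tight loop, reuse one buffer instead of allocating per iteration.
        let mut buf = String::new();
        for i in 0..1_000 {
            buf.clear(); // keeps the existing capacity
            write!(buf, "item {i}").unwrap(); // writes in place, no new String
        }

        println!("{cheap:?} {costly} {buf}");
    }
    ```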

    I have seen quite a few people make things way harder for themselves by trying to avoid clone at all costs in all situations, and IMO articles like this add to that, as they never explain the main nuances of allocations - when you want to avoid them and when they are actually fine to use.




  • The known unknowns and especially the unknown unknowns never get factored into an estimate. People only ever think about the happy path, where everything goes right. But that rarely ever happens, so estimates are always wildly off.

    The book How Big Things Get Done describes a much better way to factor in everything without knowing all the unknowns: just look at previous similar projects, see how long they took, take the average and the bounds, then adjust up or down if you have a good reason to do so. Your project will very likely take a similar amount of time if your samples are similar in nature to your current task. The actual times already factor in all the issues and problems encountered, and even if you don’t hit exactly the same issues, your problems will likely cost a similar amount of time. And the more previous examples you have, the better these estimates get.
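    A toy sketch of that reference-class idea (the sample durations are made up):

    ```rust
    fn main() {
        // Durations, in weeks, of previous similar projects - made-up numbers.
        let past = [9.0_f64, 14.0, 11.0, 22.0, 12.0];

        let mean = past.iter().sum::<f64>() / past.len() as f64;
        let min = past.iter().cloned().fold(f64::MAX, f64::min);
        let max = past.iter().cloned().fold(f64::MIN, f64::max);

        // The anchor is the reference-class average; the bounds come from the
        // best and worst past cases. Only adjust with a concrete reason.
        println!("estimate: ~{mean:.1} weeks (range {min:.0}-{max:.0})");
    }
    ```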

    But instead of that we just pluck numbers out of the air and wonder why we never hit them.






  • but you think those same users will be totally interested in spending hours writing Perl or JSON configs and memorizing dozens of keyboard shortcuts for every function they used to use the mouse for??

    Of course not. This is the argument for a tiling desktop environment. The only reason people need to do all that ATM is the current crop of tiling window managers, not because tiling window management is inherently complex to understand. You can have a tiling window manager with GUI configuration and better mouse support while still supporting keyboard shortcuts. Then users can incrementally learn the shortcuts - like they do with floating window managers - to gain productivity in their day to day tasks.

    They might not be for everyone, but giving everyone the choice is not a bad thing either. Most people I have seen try a tiling window manager end up liking it, and quite a few hate going back to floating ones. But not all of them can be bothered with the amount of configuration the current ones require.

    So what is wrong with trying to make a tiling desktop environment that is easier to configure, easier to use, and generally batteries-included? This is essentially what it looks like Cosmic is doing - it supports both floating and proper tiling without complex configuration or needing to learn loads of shortcuts.


  • IMO the tiling support in KDE and via GNOME extensions does not look great. It cannot replace the workflow of someone who has been on a true tiling window manager. It is a benefit to those that have used floating window managers their whole life, but I could not go back to it now. Cosmic is the first desktop environment that looks like it has true tiling support (that can rival a tiling window manager), not just drag-a-window-to-a-side-of-the-screen. Though I have yet to really try it out.


  • I disagree. What is wrong with a fully featured, batteries included desktop environment that has proper tiling support (not just partial drag-the-window-to-the-edge-of-the-screen support)? Lower the barrier to entry so that more people can make use of this powerful way of working. The main reason tiling is considered hardcore is because it has mostly only been available in minimal, configure-it-yourself window managers. But tiling does not have to be for the fully-DIY crowd only.

    IMO the basic tiling support in GNOME or KDE is not good enough, so I am forced to use something minimal, but TBH I am sick of needing hundreds of lines of config to get a basic environment set up. Cosmic seems like it will be a good answer to this post, as its tiling support looks far more fully baked than other full desktop environments, and hopefully we will see more people wanting to try tiling once it reaches a more stable point.



  • This is an absolutely terrible post :/ I cannot believe he thinks it is a good argument at all. It basically boils down to:

    Here is a new feature modern languages are starting to adopt.

    You might think that is a good thing. (Lists various reasonable reasons it might be a good thing.)

    The question is: Whose job is it to manage that risk? Is it the language’s job? Or is it the programmer’s job?

    And then he moves on to the next thing in the same pattern. He lists loads of reasonable reasons you might want the feature, gives no reasons you would not want it, but phrases everything to lead you into thinking you are wrong to want these new features - while his only actual arguments are reasons you do want them…

    It makes no sense.


  • But no one actually pulls that rule through, do they?

    They do though. Loads of people new to programming read that book and create unreadable messes of code bases that follow all of his advice. I have lost count of the number of times I have inlined functions, removed layers of abstraction and generally duplicated code to get an actual understanding of what is going on, only to realize there was a vastly simpler way to structure the code that I could not see until all the layers and indirection were removed. Then I refactor again to remove the redundant code and apply more useful layers that actually make sense.

    And that is the problem we have with his book. People that need it pick up as many bad habits as good ones, leading to an overall decline in their code quality. It takes years of experience before you can recognize the bad bits and ignore them. So overall his book is a net negative on the programming world. Not all his advice is bad, but if you can tell which parts are, then you likely don’t need his advice.

    But on the layers of abstraction specifically, he takes this too far, largely because of the 4 line limit he imposes. There is a good level of abstraction, and I generally find more than 2 or 3 levels is where I start to lose any sense of what is going on. He always seems to jump to abstraction as soon as he can, but I find waiting a while and abstracting when you actually need to leads to fewer and vastly better layers of abstraction overall.

    And adding more abstraction does not help the problem of people doing too many things inside a function - they just move the work into sub-functions rather than extracting the behavior for the caller to deal with. I have never seen him give advice on when that is appropriate; his advice keeps the functionality of the original function the same and moves the logic into nested functions instead, which only covers up the issue of the function doing too much.
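    A contrived Rust sketch of the kind of thing I mean - both versions behave identically, and the names are invented:

    ```rust
    // Clean-Code style: every step hidden behind a tiny function, so you have
    // to jump around just to learn that `process` is a filtered sum.
    fn is_valid(n: i32) -> bool { n >= 0 }
    fn accumulate(total: i32, n: i32) -> i32 { total + n }
    fn process(items: &[i32]) -> i32 {
        items.iter().copied().filter(|n| is_valid(*n)).fold(0, accumulate)
    }

    // Inlined: the whole behavior is visible in one place, and a simpler
    // structure appears once the indirection is removed.
    fn process_inline(items: &[i32]) -> i32 {
        items.iter().filter(|&&n| n >= 0).sum()
    }

    fn main() {
        let items = [3, -1, 4, -1, 5];
        assert_eq!(process(&items), process_inline(&items));
        println!("{}", process_inline(&items));
    }
    ```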



  • I kinda disagree with him on this point. I wouldn’t necessarily limit to one thing, but I think functions should preferably be minimal.

    I do actually agree with him on that point - functions should do one thing. Though I generally disagree on what one thing is. It is a uselessly vague term, and he tends to lean towards the smallest possible thing a thing can be. I lean towards larger ideas: a function should do one thing, even if that one thing needs hundreds of lines to do. Where the line sits for what counts as one thing is a very hard idea to define, though.

    IMO a better metric is that code that changes together should live together - no jumping around lots of functions or files when you need to change something. Split things out when what they do can be isolated and abstracted away without taking away from the meaning of the original function, rather than trying to split everything up into 1-3 line functions. That is terrible advice.
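    A small illustration of that metric (the config format here is hypothetical):

    ```rust
    // "One thing" at a useful granularity: parsing a config line is one idea,
    // even though it takes more than three lines. Everything that changes when
    // the line format changes lives together in this one function.
    fn parse_line(line: &str) -> Option<(String, String)> {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            return None; // skip blanks and comments
        }
        let (key, value) = line.split_once('=')?;
        Some((key.trim().to_owned(), value.trim().to_owned()))
    }

    fn main() {
        let file = "# example\nname = demo\n\nthreads = 4\n";
        for (key, value) in file.lines().filter_map(parse_line) {
            println!("{key} -> {value}");
        }
    }
    ```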