TechNom (nobody)

  • 0 Posts
  • 76 Comments
Joined 1 year ago
Cake day: July 22nd, 2023



  • While I understand your point, there’s a mistake that I see far too often in the industry: using relational DBs where the data model is better suited to other sorts of DBs. For example, JSON documents are better stored in document DBs like MongoDB. I realize that your use case doesn’t involve querying the JSON - in which case it can simply be stored as text. Similar mistakes are made with time-series data, key-value data and directory-type data.

    I’m not particularly angry at such (ab)uses of RDBs. But you’ll probably get better results with NoSQL DBs. Even in cases that involve multiple data models, you could combine multiple DB systems to achieve the best results. Or better yet, there are adaptors that make an RDBMS behave like different DB types at the same time: FerretDB makes PostgreSQL behave like MongoDB, PostGIS turns it into a geographic DB, and so on.
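
    To illustrate the document-vs-text distinction, here’s a minimal sketch in Python (the DB, collection and table names are made up; it assumes a local MongoDB instance and the pymongo package). A document DB keeps the JSON structure queryable, while a relational table used as a plain JSON store is fine only if you never need to look inside the document:

      import json
      import sqlite3
      from pymongo import MongoClient  # assumes MongoDB is running locally

      doc = {"user": "alice", "tags": ["admin", "ops"], "quota": {"disk_gb": 50}}
      body = json.dumps(doc)  # serialize before pymongo adds an _id field to the dict

      # Document DB: the structure stays queryable.
      users = MongoClient().example_db.users
      users.insert_one(doc)
      admins = users.find({"tags": "admin"})  # match inside the document

      # Relational table as a plain JSON store: fine if you only read it back whole.
      conn = sqlite3.connect("example.db")
      conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, body TEXT)")
      conn.execute("INSERT INTO users (body) VALUES (?)", (body,))
      conn.commit()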







  • Python decided to use a single convention (semantic whitespace) instead of two separate ones for machine-decodable scoping and manual/visual scoping. That’s part of Python’s design philosophy: the program should behave exactly the way people expect it to, without strenuous reasoning exercises.

    But some people treat it as the original sin. Not surprised, though. I’ve seen developers and engineers nurture a weird, irrational hatred towards all sorts of conventions. It’s like a phobia.

    I have similar views about YAML. It may not be the most elegant - it had to be a superset of JSON, after all. But YAML is a semi-configuration language, while JSON is a pure serialization language. Try writing a Kubernetes manifest or a Compose file in pure JSON, without whitespace alignment or comments (which pure JSON doesn’t support anyway), and see how pleasant you find it.
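
    A minimal illustration of the comment problem in Python (the keys are made up; it assumes the PyYAML package is installed):

      import json
      import textwrap
      import yaml  # PyYAML

      yaml_text = textwrap.dedent("""
          # replicas bumped for the holiday traffic spike  <- the comment survives review
          replicas: 3
          ports:
            - 8080
            - 8443
      """)
      print(yaml.safe_load(yaml_text))  # {'replicas': 3, 'ports': [8080, 8443]}

      # The equivalent JSON simply has nowhere to put that explanation.
      json_text = '{"replicas": 3, "ports": [8080, 8443]}'
      print(json.loads(json_text))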



  • I looked at the post again and they do talk about recursion for looping (my other reply covers map over an iterator). Languages that use recursion for looping (like Scheme) rely on an optimization trick called ‘tail call optimization’ (TCO). The idea is that if the last operation in a function is the recursive call (the call to itself), you can skip all the complexities of a regular function call - like pushing variables to the stack and creating a new stack frame - and simply reuse the current frame. This way, recursion becomes as performant as iteration and avoids problems like stack overflow.
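
    A minimal sketch of the two shapes in Python (illustrative only - CPython doesn’t perform TCO, but Scheme and friends apply exactly this trick to the first form):

      # Tail-recursive: the recursive call is the very last operation, so a
      # language with TCO can reuse the current stack frame instead of pushing
      # a new one for every step.
      def factorial(n, acc=1):
          if n <= 1:
              return acc
          return factorial(n - 1, acc * n)  # tail position: nothing left to do afterwards

      # Not tail-recursive: the multiplication happens after the recursive call
      # returns, so every frame has to stay alive until the recursion bottoms out.
      def factorial_plain(n):
          if n <= 1:
              return 1
          return n * factorial_plain(n - 1)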


  • They aren’t talking about using recursion instead of loops. They are talking about the map method on iterators. For each element yielded by the iterator, map applies a specified function/closure and collects the results into a new iterator (usually materialized as a list). This is a functional programming pattern that’s common in many languages, including Python and Rust.

    This pattern has no risk of stack overflow, since each invocation of the function completes before the next one starts. The construct does expand to some sort of loop during execution. The only possible overhead is a single function call per element (whereas you could have written it directly as the loop body), and even that won’t be a problem if the compiler can inline the function.

    The fact that this is functional programming creates additional avenues for optimizing the program. For example, a chain of maps (or other iterator adaptors) can be combined into a single pass over the data. In practice, this pattern is as fast as hand-written loops.
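
    In Python terms, a minimal sketch (the values are made up):

      nums = [1, 2, 3, 4]

      # Each call to the mapped function finishes before the next element is
      # processed, so the call stack never grows - underneath it is just a loop.
      squares = list(map(lambda x: x * x, nums))  # [1, 4, 9, 16]

      # map objects are lazy, so a chain of them is still consumed in a single
      # pass over the data when list() finally drives the iteration.
      shifted = list(map(lambda x: x + 1, map(lambda x: x * x, nums)))  # [2, 5, 10, 17]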


  • “Uuuh, am I no true Scotsman?”

    That’s a terrible and disingenuous take. I’m saying that you won’t understand why it’s useful till you’ve used it. Spinning that as a no-true-Scotsman fallacy is just indicative of that ignorance.

    “You keep iterating the same arguments as the rest here, and I still adhere to my statement above: hardly anybody needs those tools.”

    And you keep repeating that falsehood. Isn’t that the real no-true-Scotsman fallacy? How can you even pretend to know that nobody needs it? You can’t speak for everyone else. Those who use it find it useful in the several ways that I and others have explained. You can’t just judge it away from a position of ignorance.




  • You can have both. I’ll get to that later. But first, let me explain why edited history is useful.

    Unedited histories are very chaotic and often contain errors, commits with partial features, abandoned code, reverted code, out-of-sequence code, etc. They are useful for preserving the actual progress of your own thought, but such histories are a nightmare to review. Commits should be complete (a single commit contains a full feature) and in proper order. As a reviewer, you also wouldn’t want to waste time reviewing someone else’s mistakes, experiments, reverted code, etc. Self-contained commits have another advantage - users can choose to omit an entire feature by omitting a single commit.

    Now for the part about having both - the unedited history and the carefully crafted one. Rebasing doesn’t erase the original branch. You can preserve it by creating a new branch before you rebase, or recover it later from the reflog. I use that to preserve the original development history, and then I submit the edited/crafted branch upstream.