Yes, it is a huge pain, especially if you want to have round-trip interoperability with humans using markup. Wikipedia had a major challenge with this when they decided to add a rich text editor alongside wiki markup.
Surge suppressors do not drop extra voltage to ground. They selectively short out surges between whatever two conductors have a high potential between them.
No ground conductor means there cannot be a high potential between it and anything else!
If your company is using story points to “measure” developers, they are completely misusing that concept, and it probably results in a low-teamwork environment (as you describe).
The purpose of story points is so a team can say “we’re not taking more than X work for the next two weeks. Make sure it’s the important stuff.” It is a way to communicate a limit to force prioritization by the product owner.
And, in fact, data shows that point estimation so poorly converges on reality that teams may as well assign everything a “1”. The key technique is to try to make stories the same size, and to reduce variability by having the team swarm/mob to unblock stuck work.
Who creates these tasks? They need to close the year-old items, reevaluate the work, and break it down into sub-5-day chunks. If there are so many unknowns that it’s impossible to do that, the team needs to brainstorm how to resolve them.
fyi the NeXT OS is called NeXTSTEP.
Is there a reason you post identical posts to multiple instances?
Betavoltaics have been around for 40+ years. https://en.m.wikipedia.org/wiki/Betavoltaic_device
This device generates microwatts. You’d need on the order of tens of thousands of them in parallel to power a typical mobile phone.
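Rough numbers behind that estimate (the ~100 µW per cell and ~2 W average phone draw are my assumptions, not figures from the article):

```rust
// Back-of-envelope check: how many betavoltaic cells in parallel
// to match a phone's average power draw.
fn cells_needed(phone_draw_w: f64, cell_output_w: f64) -> u64 {
    (phone_draw_w / cell_output_w).round() as u64
}

fn main() {
    // ~2 W average phone draw, ~100 µW per cell (both assumptions).
    let n = cells_needed(2.0, 100e-6);
    println!("cells needed: {}", n); // 20000
}
```

Even with generous per-cell output, the parallel count lands in the tens of thousands.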
All of their tech job openings are in India or Vietnam. I’d have assumed a major US health data handler would be developing onshore!
For most organizations, the cost of paying programmers far exceeds the cost of CPU time; benchmarks really should include how long the solution took to envision/implement and how many follow-up commits were required to tune it.
Performance is the major flaw that has prevented the half-dozen or more serious attempts at microkernels from succeeding.
Incurring a context switch for every low-level operation is just too slow.
An alternative might be a safe/provable language for the kernel and drivers, where the compiler can guarantee properties of kernel modules instead of relying on hardware protection, so everything can live in a single address space/protection domain. But then the compiler (and its output) becomes a trusted component.
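A toy sketch of that idea in Rust (hypothetical interface, not any real kernel’s API): the driver implements a narrow trait, and in a real setup the driver crate would carry `#![forbid(unsafe_code)]`, so the compiler rejects raw pointer tricks at build time and the kernel can call the driver in-process with no context switch.

```rust
/// Hypothetical driver interface the kernel calls directly,
/// in the same address space, with no MMU boundary between them.
pub trait Driver {
    /// Fill `buf` with device data; return the number of bytes written.
    fn read(&mut self, buf: &mut [u8]) -> usize;
}

/// A trivially safe example driver: zeroes the buffer.
pub struct NullDriver;

impl Driver for NullDriver {
    fn read(&mut self, buf: &mut [u8]) -> usize {
        buf.fill(0);
        buf.len()
    }
}

fn main() {
    // Safety came from the type system at compile time,
    // not from a hardware protection boundary at run time.
    let mut d = NullDriver;
    let mut buf = [0xFF_u8; 4];
    let n = d.read(&mut buf);
    assert_eq!((n, buf), (4, [0, 0, 0, 0]));
}
```

The catch the comment above points out still applies: the compiler that checked this code is now part of the trusted computing base.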