UnderOpenSky
baseline neural therapy
They sometimes use the backup so there should be a * in front.
As in *.office365.com etc.
I grabbed a list from MS which had those and more. Every day's a school day, especially in this job.
I was vaguely familiar with that. I would disagree on the binary representation where in a basic CPU 8 bits can represent 127 to -128, the most significant bit used to denote pos / neg.

They can, but this is one of those comparatively rare occasions when an understanding of maths can be useful.
The bottom line is that computers store numbers as a series of bits. Traditionally, those numbers would have been integers, stored in various lengths - 8 bits gives you 0-255, 16 bits gives you 0-65535, and so on.
But that doesn't help with any calculation that involves a non-integer value.
If you have long enough integers, you can apply a "scaling factor" (in software), so that the value 10000 (decimal) stored in your integer is actually interpreted as 1.0000, giving you 4 decimal places.
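A rough sketch of that scaling-factor idea in Python (the names and the 4-decimal-place choice are mine, just for illustration):

```python
# Fixed-point "scaling factor" trick: store values as integer ten-thousandths,
# so the stored integer 10000 is interpreted as 1.0000 (4 decimal places).
SCALE = 10_000

def to_fixed(value_str):
    """Parse a decimal string into a scaled integer, e.g. '1.25' -> 12500."""
    whole, _, frac = value_str.partition(".")
    frac = (frac + "0000")[:4]            # pad/truncate to exactly 4 places
    sign = -1 if whole.startswith("-") else 1
    return int(whole) * SCALE + sign * int(frac)

def to_display(fixed):
    """Render a scaled integer back as a string with 4 decimal places."""
    sign = "-" if fixed < 0 else ""
    fixed = abs(fixed)
    return f"{sign}{fixed // SCALE}.{fixed % SCALE:04d}"

# All the arithmetic is exact integer arithmetic - no floating point involved.
balance = to_fixed("1.2500") + to_fixed("0.1000")
print(to_display(balance))   # 1.3500
```

Because everything stays an integer until display time, 0.1 + 0.2 comes out as exactly 0.3000 - which is exactly why this style suited accounting.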
But that's clunky, and while it might be good for accounting or other "counting" jobs, it falls down a bit when it comes to scientific calculations, where you might be dealing with a very wide range of numbers. And for that we need floating-point maths.

On a computer, floating point maths is roughly equivalent to the way scientists express very big or small numbers as a power of 10 - 6.02x10^23 and the like - except it's done in binary. A lot of this was driven by the development of floating point co-processors, to which a CPU could offload the task of performing a cycle-hungry floating point calculation, and that got formalised into an IEEE standard for the representation of floating point numbers. Essentially, you have an integer mantissa, which is the number itself, and an (also integer) exponent, which is the power of 2 that the mantissa must be multiplied by to get the real value out.
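You can actually peek at that mantissa/exponent split from Python, which exposes it via math.frexp (it normalises the mantissa into a fraction between 0.5 and 1 rather than an integer, but it's the same idea):

```python
import math

# frexp splits a float into (m, e) such that value == m * 2**e,
# with 0.5 <= abs(m) < 1 for any non-zero value.
m, e = math.frexp(0.1)
print(m, e)           # 0.8 -3   (0.8 * 2**-3 == 0.1, as stored)

# float.hex() shows the raw hex mantissa and binary exponent directly.
print((0.1).hex())    # 0x1.999999999999ap-4
```

That repeating 9...9a pattern in the hex form is the binary equivalent of a recurring decimal - a hint of the representability problem below.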
The snag here is that, even though IEEE 754 (I looked it up) specifies some seriously big word lengths, some decimal numbers just won't resolve exactly into the mantissa/exponent format. Notably 0.1 - which is to binary what 1/3 is to decimal: no matter how many 3s you add after the decimal point, you never exactly express the value of that fraction.
And there are a lot more of those numbers in the binary system than there are in the decimal one.
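The classic demonstration, straight from a Python prompt (Python floats are 64-bit doubles, so the tail of noise digits sits further out than in the single-precision example below):

```python
from decimal import Decimal

# Neither 0.1 nor 0.2 is exactly representable in binary, and the
# tiny errors show up as soon as you compare the results.
print(0.1 + 0.2 == 0.3)    # False
print(0.1 + 0.2)           # 0.30000000000000004

# Decimal(float) reveals the exact value actually stored for 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```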
I think one of those clunky old-skool programmer skills was in knowing how accurate your answer needed to be, letting the floating point thing do its magic, and then representing the number (i.e. printing it) in a real-world format that ignores those weird digits down in the far end of the fractional bit, maybe doing a bit of judicious programmed tidying up at intermediate phases of the process...
So, if you store and retrieve 0.1 as a (32-bit, single-precision) floating point number, what you get back is 0.100000001490116119384765625.
But you're never going to present it to your end user like that - you apply a format to the number to enable it to display in a meaningful way, and the size of the error is too tiny to be significant for most purposes. Depending on the practicalities, you might want to round the display of the number to 2 decimal places for, e.g., currency, or perhaps 4 for some measurement thing. Either way, the error noise isn't showing up until the ninth digit of the fractional part, so it's lost in the weeds. Except for some very involved calculations with FP numbers, those errors will rarely approach a point where they fundamentally mess things up. Although there are various defensive programming approaches aimed at catching/checking things along the way.
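Formatting the display is a one-liner in Python - the labels here are just made-up examples of the currency/measurement cases:

```python
x = 0.1 + 0.2                  # actually 0.30000000000000004 under the hood

print(f"Price: {x:.2f}")       # Price: 0.30    (2 d.p. for currency)
print(f"Reading: {x:.4f}")     # Reading: 0.3000 (4 d.p. for a measurement)
```

The noise is still there in x; the format spec just rounds it away at display time.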
Oops, that went on a bit - I got a bit misty-eyed about the Good Old Days, and long integers, floating point coprocessors, etc.
Fair point on the sign bit - I was trying to keep it very simple...
But even with rounding factors 7-3 is still 4 and not 6.
Can you post a link when you get a chance, please? Just for future reference - I looked but failed to find one to post for you before.
Cheers!

Here it is. I need to verify I got the autodiscover correct tomorrow, but it's been one of those days and it's good enough for now!
Office 365 URLs and IP address ranges - Microsoft 365 Enterprise
Summary: Office 365 requires connectivity to the Internet. The endpoints below should be reachable for customers using Office 365 plans, including Government Community Cloud (GCC). (learn.microsoft.com)
I had a course on high-voltage valve electronics at uni - the lecturer gave the acceptable margin for design errors for voltages etc. as 50%.
print("Balance:", round(Balance, 2))
Yes, bit of a cop-out I felt - existentialist explained all the simple stuff but the philosophical question went completely unanswered.
I was thinking more along the lines of rounding out the errors at the ends of critical functions.
print("Balance:", round(Balance, 2))

def very_complicated_calculation(...):
    value = GetComplicatedValue(...)
    return round(value, 4)  # 4 decimal places enough in this hypothetical case
You do that, too, but you trim up your untidy numbers where you think the errors might accumulate. It's usually not necessary.

That's good, but not quite as elegant as my solution, I feel.
Valves are pretty robust things.
Oh, if you want to do networking in particular, grab Nmap and the Nmap cookbook. It's a great tool for learning about networking, ports, all that stuff. Just don't run it against a computer you don't own. Just in case, legal-wise.
Thanks for the recommendation. Really interesting (contrary to my first impression, which is why I've taken so long to respond), really easy to follow, and slightly boggling.
So am I right that if you've got someone's IP address you can:
* circumvent firewalls and anti-intrusion systems to find things like what operating system someone’s running, the software versions of services on any open ports
* use zombie scans, decoy addresses and spoof MAC addresses and presumably VPN to avoid detection
My firewall says all incoming ports are blocked and all outgoing ports open but the scan shows open ports. Is that because I’m scanning outwards from my network to the router so those ports look open?
Interested to learn how you know who's attacked your system, by the way.
So the public IP is normally that of the router? And the private IPs are the internal addresses of the computers on the network?
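Exactly that - the internal addresses come from the reserved private ranges (192.168.x.x and friends), and Python's ipaddress module can even tell the two kinds apart (the addresses here are just examples - 8.8.8.8 is one of Google's public DNS servers):

```python
import ipaddress

# A typical LAN address handed out by a home router (RFC 1918 private space).
print(ipaddress.ip_address("192.168.1.10").is_private)   # True

# A public, internet-routable address for contrast.
print(ipaddress.ip_address("8.8.8.8").is_private)        # False
print(ipaddress.ip_address("8.8.8.8").is_global)         # True
```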
Any more of these left?