So, we talked about integers. Now, let's talk about type conversions a little bit before we go on to the next basic types. There are cases where you need to convert a value from one type to another, and for that, you use a type conversion. Now, these conversions are not always possible, but when they are, here's how you do them. Let's say, for instance, you've got integers of different lengths. I've got variable X, a 32-bit integer, and variable Y, a 16-bit integer, and I want to say X equals Y, to assign one to the other so I can do operations with them together. That, as shown, would actually fail. The reason it fails is that these are two different types of integers, so the compiler sees them as two different types. Int32 is a different type than int16, and when you assign like that, the thing on the left-hand side and the thing on the right-hand side have to be the same type. Even though they're both integers, the fact that they're different lengths means they're different types, so it throws an error.

So, in order to do this, you've got to convert one to the other. For instance, you might take that Y, which is an int16, convert it to an int32, and then assign X to that. That would work. The way you do type conversions like that is with this T() operation, where T is the name of the type. So, if I want to convert Y, an int16, into an int32, I just use this built-in function int32(), which takes whatever its argument is and tries to convert it into an int32. X equals int32(Y): that takes Y, which is an int16, and converts it to an int32. Now, this is a conversion that is possible. All it has to do is sign-extend the Y integer. You don't want to change Y's value; it's equal to two. Y is a 16-bit version of two, and we want to make it into a 32-bit version. So, what it's going to do is take the sign bit and just extend it. If the sign bit is zero, meaning it's a positive value, it just puts 16 zero bits in the high positions, and that gives you a 32-bit number which is equivalent, a 32-bit representation of two, and X is set equal to that. So, this is a type of conversion that's possible, and it's easy to do with this int32() function. Note that there are other types of conversions that you can't do so easily, and it will fail on those, but some of them are possible, like this one. There's a short code sketch of this right below.

So, another basic type, besides integers, is the floating point. Floating points are basically real numbers. Now, it depends on how many bits long the floating point is: float32 gives you approximately six decimal digits of precision, and float64 gives you approximately 15 decimal digits of precision. So, you figure out how big you want it to be based on how much precision you need. Often, you want to go longer rather than shorter, because precision errors are a common problem in floating point arithmetic, so more precision is probably better. Of course, there's a space issue, you use more memory if you make them longer, and performance changes too. But still, precision errors are an issue sometimes.
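Here's a minimal sketch of that integer conversion (I've lowercased the variable names x and y to keep the code idiomatic):

```go
package main

import "fmt"

func main() {
	var x int32
	var y int16 = 2

	// x = y // compile error: int16 and int32 are different types

	// Convert first. The 16-bit value is sign-extended to 32 bits,
	// so the value is still two.
	x = int32(y)
	fmt.Println(x) // prints 2
}
```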
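And a small sketch of the precision difference between float32 and float64. The exact digits printed may vary, but the idea is that float32 runs out of accurate digits around six places, while float64 holds on to about 15:

```go
package main

import "fmt"

func main() {
	// The same value stored at two different precisions.
	var f32 float32 = 1.0 / 3.0
	var f64 float64 = 1.0 / 3.0

	fmt.Println(f32) // 0.33333334  (about 6 accurate decimal digits)
	fmt.Println(f64) // 0.3333333333333333  (about 15 accurate digits)
}
```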
So, you can express floating point numbers with decimals or in scientific notation. You can see the two forms: var X float64 = 123.45, that's a decimal, and it'll be stored in a floating point representation. You can also represent the same thing scientifically, with this E as the exponent for the 10, so this is base 10: E2 means 10 to the two. Also, you can represent complex numbers if you want to; Go has complex numbers. If you remember complex numbers from high school or wherever you learned them, you've got the real part and the imaginary part. The way you create a complex number is with this complex() function. You give it two arguments: the first number is the real part, the second number is the imaginary part. So, complex(2, 3) would be the complex number two plus three i. I'll show quick code sketches of both of these in a moment.

So, next, we are going to talk about strings, and in order to talk about strings, we need to talk about ASCII and Unicode. Strings are going to be sequences of bytes, we'll see that in the next slide, but strings are made to represent the different characters that you see. Often, strings are made for printing. They don't have to be, but they're often meant to represent printed things. So, for instance, the string "hello world" is something that's meant to be printed and seen by a user. Now, each one of these characters that you want to store in a string has to be coded according to a standardized code. ASCII, the American Standard Code for Information Interchange, was basically the first accepted one, and it's just a character coding: each character that you want to represent is represented with an 8-bit code. So, for instance, a capital A in ASCII is the number 41 in hexadecimal. I just know that off the top of my head; it's a common code. So, ASCII is an 8-bit long code, which means it can represent a maximum of 256 possible characters. Really, it does 128, because one of the bits is used for something else. That's not a lot of characters. An 8-bit code is sufficient for English, because there aren't that many letters in the alphabet, but once you start incorporating other characters that you need to include, say, Chinese, which is a good example because there are a lot of characters in Chinese, you can't use an 8-bit code and hope to represent them. Once you start trying to cover all these different character sets in different languages, and characters that maybe aren't even part of languages but things that you want to show on the screen anyway, you need a lot more than 8 bits.

So, that's what Unicode is for. Unicode is a character code that is 32 bits long, so you can represent two to the 32 values, which is around four billion. That's a lot of characters. Now, UTF-8 is a variable-length encoding of Unicode. A character can be 8 bits, but it can go up to 32 bits. The first set of codes in UTF-8 matches ASCII, so all the ASCII code values are the same as their UTF-8 values. For instance, capital A is hexadecimal 41 in ASCII, and it's also 41 in UTF-8. Now, UTF-8 also includes a lot of other codes, like, for instance, Chinese characters. Those aren't in the first 128, they're outside of that, and they require more bytes; you can't use an 8-bit code to represent those, they might need 16 bits or 32 bits.
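Before we finish up with UTF-8, here are the two quick sketches I promised. First, the two floating point literal forms; both lines below store the same value:

```go
package main

import "fmt"

func main() {
	var x float64 = 123.45   // decimal form
	var y float64 = 1.2345e2 // scientific notation: 1.2345 times 10 to the 2

	fmt.Println(x, y) // 123.45 123.45
}
```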
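Second, the complex number example, using the built-in complex(), real(), and imag() functions:

```go
package main

import "fmt"

func main() {
	// The first argument is the real part, the second is the imaginary part.
	z := complex(2, 3) // the complex number 2 + 3i

	fmt.Println(z)                // (2+3i)
	fmt.Println(real(z), imag(z)) // 2 3
}
```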
So, back to UTF-8. UTF-8 is a variable-length code, but it represents a lot more characters than you can represent with ASCII, and the default in Go is UTF-8. In Unicode, a code point is the term for a Unicode character, so there can be up to two to the 32 code points. In Go, they call a code point a rune; rune is just Go's term for a code point. So, the capital A character has a rune which is represented with 0x41, hexadecimal 41. That's called its rune.

Now, to strings. Strings are arbitrary sequences of bytes, represented in UTF-8. Each character in a string is a rune, a UTF-8 code point, and a rune can take one or more bytes. Strings are read-only: you can't modify a string. You can make a new string that is a modified version of an existing string, but you can't modify the existing one. Strings are often meant to be printed or displayed to a user. A string literal is just a string that's notated with double quotes. So, for instance, if I say x := "Hi there", that's a string literal, a sequence of bytes. Each one of those characters, H, i, space, t, h, e, r, e, is going to be represented as a rune, a UTF-8 code point, and they're put together. An array of those, and we'll cover arrays soon, but an array of those is going to be a string. There are two short sketches of runes and strings below.
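To tie runes and code points together, here's a minimal sketch. It prints the code point of capital A and then walks the string from the literal example above:

```go
package main

import "fmt"

func main() {
	// A rune literal is a single Unicode code point.
	var r rune = 'A'
	fmt.Printf("%c is code point 0x%X\n", r, r) // A is code point 0x41

	// Ranging over a string decodes it rune by rune;
	// the index is the byte offset where each rune starts.
	s := "Hi there"
	for i, c := range s {
		fmt.Printf("byte %d: %c\n", i, c)
	}
}
```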
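And a sketch of the read-only property: you can't change a byte of an existing string, but you can build a new string that's a modified copy:

```go
package main

import "fmt"

func main() {
	s := "hi there"

	// s[0] = 'H' // compile error: strings are read-only

	// Build a new string instead, reusing a slice of the old one.
	t := "H" + s[1:]
	fmt.Println(t) // Hi there
}
```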