The future in time series.

Abstract
Time series matter in machine learning because the most popular machine learning algorithms do not consider time: they map fixed input values to output values, which limits them in changing scenarios. In the real world, however, most scenarios require algorithms that account for variation over time. The main feature of time-series analysis and forecasting algorithms is therefore the use of functions of time rather than predefined fixed features, both to observe trends and to predict future values.
Introduction
We have all wanted to predict the future, and humanity keeps advancing toward that purpose. The truth is that our need for, or illusion of, knowing the future helps guarantee our survival as a species. This can be seen in many areas, such as the current year's harvest, weather conditions, the demographics of a particular territory, the projection or performance of a company, and so on.
Throughout history, we have used many unreliable techniques and methods for this. In our times, we have combined different areas of knowledge into professions such as data scientists and data engineers, who use machine learning and artificial intelligence algorithms and methodologies that add a dimension of time, as well as multiple feature dimensions, so that the data represent how something changes over time. There are many settings in which temporal data is natural and the order of the data matters, such as the position of words in a sentence: characters and words are data that only make sense in the correct context and order that time provides. Data is data, but in this context time is order.
Data and pre-processing
We have at our disposal Bitstamp data for the historical value of Bitcoin. Since the data is raw, we must process it so that our model can work with it. The first step is to extract the data from the source file with the help of a function, then sort it into the right format for processing.
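A minimal sketch of that loading step, using pandas. The column names (`Timestamp`, Unix seconds) are assumptions based on the usual layout of Bitstamp export files; adjust them to the actual source file.

```python
import pandas as pd

def load_bitcoin_csv(path):
    """Load a raw Bitstamp-style CSV and put it in time order.

    Column names here are assumptions; adapt to the real file.
    """
    df = pd.read_csv(path)
    # Unix-seconds timestamp -> datetime index, sorted chronologically.
    df["Timestamp"] = pd.to_datetime(df["Timestamp"], unit="s")
    df = df.sort_values("Timestamp").set_index("Timestamp")
    # Forward-fill gaps (e.g. minutes with no trades), drop leading NaNs.
    df = df.ffill().dropna()
    return df
```

Sorting before anything else matters: the windowing done later assumes the rows are already in chronological order.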
Segregation and standardization of data
For our model, we will need three sets of data: one for training, one for validation, and one for testing. We therefore divide the data universe into sets by percentages, and normalize the data using the mean and standard deviation.
At this point, we have three data sets, trimmed, filled, cleaned, sampled, and normalized, and almost ready to feed the model. However, we still have to feed the model through data windows so it can process them.

Once the new variables and the cut are created, we define within that function the division of the data into windows. Then we load the data in a tf.data.Dataset format and perform the split.
Our job is to predict the closing value based on the history of the past 24 hours. For this, the model needs both inputs and labels that match this time window, so, following the TensorFlow documentation, we create a class for this purpose.
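The class below is a deliberately stripped-down sketch of that idea, not the full `WindowGenerator` from the TensorFlow tutorial (which wraps everything in a tf.data.Dataset): each example pairs the past 24 steps of features with the next value of the target column. The class and method names are illustrative.

```python
import numpy as np

class WindowGenerator:
    """Minimal windowing sketch: (past input_width steps) -> next target value."""

    def __init__(self, data, input_width=24, label_column=0):
        self.data = np.asarray(data, dtype=np.float32)  # shape (time, features)
        self.input_width = input_width
        self.label_column = label_column  # e.g. the index of the Close column

    def make_examples(self):
        X, y = [], []
        for start in range(len(self.data) - self.input_width):
            end = start + self.input_width
            X.append(self.data[start:end])               # 24 steps of history
            y.append(self.data[end, self.label_column])  # the next closing value
        return np.stack(X), np.array(y)
```

In the tutorial version, the same pairing is produced lazily with `tf.keras.utils.timeseries_dataset_from_array`, which yields batched tf.data.Dataset windows instead of in-memory arrays.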
Model Architecture
For this exercise, I use an RNN architecture because it works well for sequential data; as the specific variant, I choose the LSTM architecture.
The LSTM architecture is a kind of recurrent neural network (RNN). An RNN, in turn, is a neural network that models time-dependent or sequential behavior by feeding the output of a layer at a given step back into the same layer at the next step.
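A small Keras sketch of such a model, matching the 24-hour windows above: one LSTM layer summarizes the sequence, and a dense layer outputs the single predicted closing value. The layer sizes and optimizer are illustrative assumptions, not tuned choices from the original work.

```python
import tensorflow as tf

def build_lstm_model(input_width=24, num_features=1):
    """LSTM forecaster: (input_width, num_features) in, one value out."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_width, num_features)),
        tf.keras.layers.LSTM(32),   # folds the 24-step sequence into one vector
        tf.keras.layers.Dense(1),   # predicts the next closing value
    ])
    model.compile(loss="mse", optimizer="adam", metrics=["mae"])
    return model
```

Training is then the usual `model.fit(train_windows, validation_data=val_windows)` on the windowed datasets built earlier.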
Conclusion
Time series forecasting is a powerful tool, but it has its limitations: past data does not always provide enough information to predict the future.