Optimization of Satellite Communication

Published: 2020-04-27 10:39:55
Satellite communication has offered data and voice services since its inception, and over the past few years commercial deployment of such services has become a global phenomenon. However, satellite links were never designed for data transmission, so protocol behavior usually needs to be altered and optimized to increase efficiency. Satellites are an ideal means of offering intranet and Internet access to remote locations and over long distances, yet Internet protocols are typically not tuned for satellite conditions; more often than not, throughput over satellite networks is restricted to only a fraction of the available bandwidth. Isotropic engineering has tried to address and effectively overcome such issues.

TCP/IP was never designed to perform well on noisy or high-latency channels. Links to geostationary satellites can be noisy and are inherently slow, and TCP/IP-based data communication becomes unusable at bit error rates (BER) of 10⁻⁷ and higher. The well-known disadvantages of TCP/IP in a typical satellite environment are limited window size, degradation due to slow-start, and acknowledgment frequency. Attempts to deliver IP over satellite have existed, but satellite technologies have focused on connection-oriented transmission protocols that suit voice traffic rather than Internet Protocol, and they waste expensive capacity unnecessarily.

Network congestion is a state that occurs when a node or link carries so much data that its quality of service deteriorates. Typical effects of congestion include the blocking of new connections, packet loss, and queuing delays. As a result, an incremental increase in offered load leads either to only a small increase in network performance or to an actual reduction in network throughput.
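The throughput restriction described above can be made concrete with a small calculation. TCP can have at most one window of data in flight per round trip, so throughput is capped at window ÷ RTT no matter how fast the link is. The figures below (a 64 KiB window and a ~600 ms geostationary round trip) are illustrative assumptions, not values from this essay:

```python
# Sketch: why an untuned TCP window throttles a GEO satellite link.
# Assumed figures: 64 KiB receive window, ~600 ms round-trip time.

def max_tcp_throughput(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput: at most one window per round trip."""
    return window_bytes / rtt_s  # bytes per second

window = 64 * 1024   # 64 KiB window, typical of terrestrial tuning
rtt = 0.6            # ~600 ms geostationary round trip

bps = max_tcp_throughput(window, rtt) * 8
print(f"{bps / 1e6:.2f} Mbit/s")  # ~0.87 Mbit/s, regardless of link capacity
```

Under these assumptions the connection is limited to well under 1 Mbit/s even on a much faster transponder, which is the "fraction of the available bandwidth" effect described above.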
Network protocols that use aggressive retransmissions to compensate for packet loss often keep systems in a state of congestion even after the initial load has been reduced to a level that would not, by itself, have caused congestion. Networks using such protocols can therefore exhibit two stable states at the same load level; the stable state with low throughput is known as congestive collapse. Congestion control governs the entry of traffic and data into a telecommunication network. It helps avoid congestive collapse by attempting to prevent oversubscription of the link and processing capacities of intermediate nodes and networks, taking steps such as reducing the rate at which packets are sent.

To avoid the possibility of a congestive network meltdown, TCP assumes that most, if not all, data loss is caused by traffic congestion, and it responds by reducing its transmission rate, using a method called slow-start to probe the available capacity after connection setup. Slow-start sends a small amount of data across the connection and awaits acknowledgment; once the acknowledgment is received, the next, larger batch is sent. This procedure repeats until the capacity of the link is reached. With roughly half a second of delay between responses on a geostationary link, the ramp-up is slow; evidently, if slow-start could be avoided, a significant drag on TCP/IP performance would be eliminated.

Latency is further aggravated in TCP/IP transmissions by the fact that the protocol requires acknowledgment of all packets sent across the link. The simple, heuristic acknowledgment pattern used by TCP is not suited to long latency or highly asymmetric bandwidth. To provide reliable data transmission, the TCP receiver frequently sends acknowledgments for received data back to the sender, which ensures reliable communication under congested and uncertain network conditions.
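The slow-start ramp-up described above can be sketched as follows. This is a simplified model, not an implementation of any particular TCP stack: the congestion window starts at one segment and doubles every round trip until it reaches a threshold (the threshold value below is illustrative):

```python
# Sketch of TCP slow-start: the congestion window starts at one
# segment and roughly doubles each round trip (one extra segment per
# ACK) until the slow-start threshold is reached.

def slow_start_rounds(ssthresh_segments: int) -> list[int]:
    """Congestion window size (in segments) at each RTT of slow-start."""
    cwnd, history = 1, []
    while cwnd < ssthresh_segments:
        history.append(cwnd)
        cwnd *= 2  # each acknowledged segment grows the window by one
    history.append(min(cwnd, ssthresh_segments))
    return history

# With a ~600 ms satellite RTT, every entry below costs ~0.6 s of ramp-up.
print(slow_start_rounds(64))  # [1, 2, 4, 8, 16, 32, 64]
```

Seven round trips over a geostationary link is roughly four seconds spent just reaching a 64-segment window, which illustrates why avoiding slow-start on the satellite hop is attractive.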
TCP/IP has a built-in windowing system that seeks the highest possible throughput while balancing the risk of retransmitting dropped packets. It works by letting a transmitter send a window's worth of packets before waiting for an acknowledgment from the receiver. Window sizes are normally set to suit low-latency terrestrial connections, so they are far too small for a satellite link, and data sits waiting for transmission until outstanding packets are acknowledged. Short windows are what derail satellite connections; the constricted flow of packets can be opened up by tuning the window size to the known latency and the expected noise performance of the link.

To overcome slow-start, Isotropic developed a TCP acceleration protocol that operates on the link to the satellite. The Isotropic system reduces BER to levels that integrate well with TCP/IP. In the Isotropic domain, TCP connections are terminated and re-established at both ends of the satellite link, and over this link a new protocol is used that is optimized for the satellite environment and transparent to TCP/IP. Because the BER and latency characteristics of the satellite link are known, a suitable window size can be selected. Even under rain and uneven weather conditions, the Isotropic system still performs at a BER of 10⁻⁹; under clear, good weather, it performs better, at 10⁻¹⁰. At such rates, large window sizes are possible, which eliminates window-related throughput degradation. The low BER also allows a reduced acknowledgment frequency, increasing network efficiency without sacrificing reliability. Together, the reduced acknowledgment frequency and low bit error rate of the Isotropic network lessen the latency-induced drag on TCP/IP. The system works at roughly 80% of link capacity, thanks to the advanced Turbo Product Code mechanism it implements. Even so, measures must be put in place to avoid congestion.
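Selecting a window "tuned to the known latency" typically means sizing it to the bandwidth-delay product (BDP), so the pipe stays full for a whole round trip. The link rate and RTT below are illustrative assumptions, not figures from this essay:

```python
# Sketch: sizing a TCP window from the bandwidth-delay product (BDP)
# for a satellite link with known latency. Link rate and RTT are
# assumed example values.

def bdp_window_bytes(link_bps: float, rtt_s: float) -> int:
    """Window needed to keep the link busy: bandwidth x delay, in bytes."""
    return int(link_bps * rtt_s / 8)

window = bdp_window_bytes(10e6, 0.6)  # 10 Mbit/s GEO link, 600 ms RTT
print(window)  # 750000 bytes -- more than 11x the 64 KiB default
```

A window an order of magnitude larger than the terrestrial default is only safe when the BER is low enough that a whole window rarely has to be retransmitted, which is exactly the trade-off the low-BER Isotropic link is described as enabling.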
However, the Isotropic protocol does not apply unnecessary congestion-avoidance mechanisms to the hop over the satellite between the Isotropic modems. Active queue management is the dropping of network packets inside a transmit buffer associated with a network interface controller; the task is carried out by the network scheduler, which uses various algorithms for the purpose. One such algorithm is random early detection (RED): applied to the port queue buffer of network equipment, RED signals congestion to sender and receiver by dropping some packets early. Another is robust random early detection (RRED), which was proposed to protect TCP throughput against denial-of-service attacks, specifically low-rate denial-of-service attacks. A third is flow-based RED/WRED: some network equipment has ports that can follow and measure each flow, and can thereby signal a high-bandwidth flow according to some policy.

Another approach is IP Explicit Congestion Notification (ECN). With this mechanism, congestion is signalled explicitly by setting a protocol bit, which is better than the indirect congestion notification that the RED/WRED algorithms perform by deleting packets, although some outdated network equipment drops packets with the ECN bit set rather than ignoring the bit (Floyd, ECN). When a router receives a packet marked as ECN-capable and RED anticipates congestion, the router sets the ECN flag to alert the sender of traffic congestion; the sender then responds by decreasing its transmission rate. Cisco Systems has taken such steps on the Catalyst 4000 series with Supervisor Engine IV and Engine V, which can classify all flows as adaptive or aggressive and ensure that no flow fills the port queues for a long period of time. Backward ECN is another identified congestion-notification mechanism.
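The RED behavior described above can be sketched in a few lines. This follows the classic scheme (drop nothing below a low threshold, drop with linearly rising probability between the thresholds, drop everything above the high threshold); the threshold values and maximum drop probability are illustrative, not taken from any particular router:

```python
import random

# Minimal sketch of Random Early Detection (RED): early, probabilistic
# drops signal congestion before the queue actually overflows.
# min_th, max_th (in packets) and max_p are illustrative parameters.

def red_drop(avg_queue: float, min_th=5.0, max_th=15.0, max_p=0.1) -> bool:
    """Return True if the arriving packet should be dropped."""
    if avg_queue < min_th:
        return False            # queue is short: never drop
    if avg_queue >= max_th:
        return True             # queue is long: always drop
    # In between, drop probability rises linearly from 0 to max_p.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

assert red_drop(3.0) is False   # below the low threshold
assert red_drop(20.0) is True   # above the high threshold
```

Real RED operates on an exponentially weighted moving average of the queue length rather than the instantaneous value, which is omitted here for brevity.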
The mechanism involves the use of ICMP source quench messages to implement a basic ECN scheme for IP networks, keeping congestion alerts at the IP level and requiring no negotiation between network endpoints. Congestion can also be avoided efficiently by reducing the amount of traffic flowing into a particular system. When an application requests a web page, a graphic, or a large file, a window of between 32 KB and 64 KB is advertised, and the server then sends a full window of data. When many applications request downloads at once, the data floods the queue at an upstream provider faster than it can be emptied, creating a congestion point. By using a device to reduce the window advertisement, the remote servers are made to send less data, reducing congestion and allowing traffic to flow. This technique can reduce congestion in a network by a factor of 40.
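The window-clamping idea above can be sketched as follows: a middlebox rewrites the receiver's advertised window so that upstream servers send less data per round trip. The clamp value of 16 KiB is an illustrative assumption:

```python
# Sketch of advertised-window clamping by a middlebox: the 32-64 KiB
# window advertised by clients is rewritten downward so remote servers
# send smaller bursts. The 16 KiB clamp is an assumed example value.

def clamp_advertised_window(adv_window: int, clamp: int = 16 * 1024) -> int:
    """Rewrite the TCP advertised window, never increasing it."""
    return min(adv_window, clamp)

print(clamp_advertised_window(64 * 1024))  # 16384: large window clamped
print(clamp_advertised_window(8 * 1024))   # 8192: small window untouched
```

Because the server can never have more than one advertised window in flight, shrinking the advertisement directly shrinks each burst arriving at the congested upstream queue.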

References

Agrawal, D. P., Xie, B., & World Scientific (Firm). (2010). Encyclopedia on ad hoc and ubiquitous computing: Theory and design of wireless ad hoc, sensor, and mesh networks. Singapore: World Scientific Pub. Co.

Al-Bahadili, H. (2012). Simulation in computer network design and modeling: Use and analysis. Hershey, PA: IGI Global.

Altman, E., Basar, T., & Srikant, R. (1999). Congestion control as a stochastic control problem with action delays. Automatica, December 1999.

Athuraliya, S., Li, V. H., Low, S. H., & Yin, Q. (2001). REM: Active queue management. IEEE Network, 15(3), 48-53.

Beck, R., Hinkel, K., Eisner, W., Norda, J., Hoang, N., Fall, K., . . . Maffei, A. (2007). GPSDTN: Predictive velocity-enabled delay-tolerant networks for Arctic research and sustainability. doi:10.1109/ICIMP.2007.20

Farrell, S., & Cahill, V. (2006). Delay- and disruption-tolerant networking. Boston: Artech House.

Floyd, S., & Jacobson, V. (1993). Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4), 397-413.

Gao, L., Yu, S., Luan, T. H., & Zhou, W. (2015). Delay tolerant networks.

Gungor, V. C., & Hancke, G. P. (2013). Industrial wireless sensor networks: Applications, protocols, and standards. Boca Raton, FL: CRC Press, Taylor & Francis Group.

International Conference on Ubiquitous Information Technologies & Applications, & Han, Y.-H. (2013). Ubiquitous information technologies and applications: CUTE 2012. Dordrecht: Springer.

International Conference on Ubiquitous Information Technologies & Applications, & Jeong, Y.-S. (2014). Ubiquitous information technologies and applications: CUTE 2013.

Kim, Y., Koo, J., Jung, E., Nakano, K., Sengoku, M., & Park, Y. (2010). Composite methods for improving Spray and Wait routing protocol in delay tolerant networks. doi:10.1109/ISCIT.2010.5665178

Makki, S. A. M., Pissinou, N., & Daroux, P. (2003). LEO satellite communication networks: A routing approach.

Mangrulkar, R. S., & Atique, M. (2010). Routing protocol for delay tolerant network: A survey and comparison. doi:10.1109/ICCCCT.2010.5670553

Niyato, D., Wang, P., & Teo, J. C. (2009). Performance analysis of the vehicular delay tolerant network. doi:10.1109/WCNC.2009.4917891

Resta, G., & Santi, P. (2012). A framework for routing performance analysis in delay tolerant networks with application to noncooperative networks. IEEE Transactions on Parallel and Distributed Systems. doi:10.1109/TPDS.2011.99

Vasilakos, A., Zhang, Y., & Spyropoulos, T. (2012). Delay tolerant networks: Protocols and applications. Boca Raton,...

Zbynek, K., Marek, N., & Leos, B. Optimization of TCP satellite communication in Inmarsat network. Department of Telecommunication Engineering, Czech Technical University in Prague, Czech Republic.

