Friday 19 July 2013

Full forms of computer related terms

* HTTP - Hyper Text Transfer Protocol
* HTTPS - Hyper Text Transfer Protocol Secure
* IP - Internet Protocol
* URL - Uniform Resource Locator
* USB - Universal Serial Bus
* VIRUS - "Vital Information Resources Under Siege" (a popular backronym; the word is not actually an acronym)
* 3G - 3rd Generation
* GSM - Global System for Mobile Communications
* CDMA - Code Division Multiple Access
* UMTS - Universal Mobile Telecommunications System
* SIM - Subscriber Identity Module
* AVI - Audio Video Interleave
* RTS - Real Time Streaming
* SIS - Symbian OS Installer File
* AMR - Adaptive Multi-Rate Codec
* JAD - Java Application Descriptor
* JAR - Java Archive
* 3GPP - 3rd Generation Partnership Project
* 3GP - 3rd Generation Partnership Project file format
* MP3 - MPEG-1 Audio Layer III
* MP4 - MPEG-4 video file
* AAC - Advanced Audio Coding
* GIF - Graphics Interchange Format
* JPEG - Joint Photographic Experts Group
* BMP - Bitmap
* SWF - Shockwave Flash
* WMV - Windows Media Video
* WMA - Windows Media Audio
* WAV - Waveform Audio
* PNG - Portable Network Graphics
* DOC - Document (Microsoft Corporation)
* PDF - Portable Document Format
* M3G - Mobile 3D Graphics
* M4A - MPEG-4 Audio File
* NTH - Nokia Theme (Series 40)
* THM - Theme (Sony Ericsson)
* MMF - Synthetic Music Mobile Application File
* NRT - Nokia Ringtone
* XMF - Extensible Music File
* WBMP - Wireless Bitmap Image
* DVX - DivX Video
* HTML - Hyper Text Markup Language
* WML - Wireless Markup Language
* CD - Compact Disc
* DVD - Digital Versatile Disc
* CRT - Cathode Ray Tube
* DAT - Digital Audio Tape
* DOS - Disk Operating System
* GUI - Graphical User Interface
* ISP - Internet Service Provider
* TCP - Transmission Control Protocol
* UPS - Uninterruptible Power Supply
* HSDPA - High Speed Downlink Packet Access
* EDGE - Enhanced Data Rates for GSM [Global System for Mobile Communications] Evolution
* VHF - Very High Frequency
* UHF - Ultra High Frequency
* GPRS - General Packet Radio Service
* WAP - Wireless Application Protocol
* ARPANET - Advanced Research Projects Agency Network
* IBM - International Business Machines
* HP - Hewlett-Packard
* AM/FM - Amplitude / Frequency Modulation
* WLAN - Wireless Local Area Network

Pointer Problems

Pointer Variable in C

Sunday 14 July 2013

Some Complexity Related Papers by Oded Goldreich

The following papers can be obtained in PostScript (papers in REVERSE CHRONOLOGICAL ORDER):

Low quality PostScript files of very old papers

Saturday 13 July 2013

Introduction to Complexity Theory

Lecture 1: The P vs NP Question. We review the fundamental question of computer science, known as the P versus NP question: Given a problem whose solution can be verified efficiently (i.e., in polynomial time), is there necessarily an efficient method to actually find such a solution? Loosely speaking, the first condition (i.e., efficient verification) is captured in the definition of NP, and the second in that of P. The actual correspondence relies on the notion of self-reducibility, which relates the complexity of determining whether a solution exists to the complexity of actually finding one.
Lecture 2: NP-completeness and Self Reducibility. We prove that any relation defining an NP-complete language is self-reducible. This will be done using the SAT self-reducibility (proved in Lecture 1), and the fact that SAT is NP-Hard under Levin Reductions. The latter are Karp Reductions augmented by efficient transformations of NP-witnesses from the original instance to the reduced one, and vice versa. Along the way, we give a simple proof of the existence of NP-Complete languages (by proving that Bounded Halting is NP-Complete).
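As a side illustration (not from the lecture notes), here is a minimal Python sketch of the search-to-decision idea behind self-reducibility. It assumes a hypothetical black-box oracle sat_decide(clauses) that only answers whether a CNF formula is satisfiable (clauses are lists of signed integers, DIMACS-style; an empty clause means the formula is unsatisfiable); given such an oracle, a satisfying assignment can be recovered by fixing one variable at a time.

    # Hypothetical sketch: SAT search reduces to SAT decision (self-reducibility).
    # `sat_decide` is an assumed black-box oracle returning True iff the CNF formula
    # (a list of clauses; each clause a list of signed integers) is satisfiable.

    def substitute(clauses, var, value):
        """Simplify a CNF formula after setting variable `var` to `value`."""
        lit_true = var if value else -var
        result = []
        for clause in clauses:
            if lit_true in clause:
                continue                                          # clause already satisfied
            result.append([l for l in clause if l != -lit_true])  # may become [] (unsatisfiable)
        return result

    def sat_search(clauses, num_vars, sat_decide):
        """Find a satisfying assignment using only the decision oracle."""
        if not sat_decide(clauses):
            return None                                # no solution exists
        assignment = {}
        for var in range(1, num_vars + 1):
            trial = substitute(clauses, var, True)
            if sat_decide(trial):                      # var = True keeps the formula satisfiable
                assignment[var] = True
                clauses = trial
            else:                                      # otherwise var = False must work
                assignment[var] = False
                clauses = substitute(clauses, var, False)
        return assignment

With n variables this makes n + 1 oracle calls, so a polynomial-time decision procedure for SAT would immediately yield a polynomial-time search procedure.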
Lecture 3: More on NP and some on DTIME. In the first part of this lecture we discuss two properties of the complexity classes P, NP and NPC: the first property is that NP contains problems which are neither NP-complete nor in P (provided NP not equal P), and the second one is that NP-relations have optimal search algorithms. In the second part we define new complexity classes based on exact time bounds, and consider some relations between them. We point out the sensitivity of these classes to the specific model of computation (e.g., one-tape versus two-tape Turing machines).
Lecture 4: Space Complexity. We define ``nice'' complexity bounds; these are bounds which can be computed within the resources they supposedly bound (e.g., we focus on time-constructible and space-constructible bounds). We define space complexity using an adequate model of computation in which one is not allowed to use the area occupied by the input for computation. Before dismissing sub-logarithmic space, we present two results regarding it (contrasting sub-loglog space with loglog space). We show that for ``nice'' complexity bounds, there is a hierarchy of complexity classes - the more resources one has, the more tasks one can perform. On the other hand, we mention that this increase in power may not happen if the complexity bounds are not ``nice''.
Lecture 5: Non-Deterministic Space. We recall two basic facts about deterministic space complexity, and then define non-deterministic space complexity. Three alternative models for measuring non-deterministic space complexity are introduced: the standard non-deterministic model, the online model, and the offline model. The equivalence between the standard and online models and their exponential relation to the offline model are proved. We then turn to investigate the relation between the non-deterministic and deterministic space complexity (i.e., Savitch's Theorem).
Lecture 6: Non-Deterministic Logarithmic Space. We further discuss composition lemmas underlying previous lectures. Then we study the complexity class NL (the set of languages decidable within Non-Deterministic Logarithmic Space): We show that directed graph connectivity is complete for NL. Finally, we prove that NL = coNL (i.e., NL is closed under complementation).
Lecture 7: Randomized Computations. We extend the notion of efficient computation by allowing algorithms (Turing machines) to toss coins. We study the classes of languages that arise from various natural definitions of acceptance by such machines. We focus on probabilistic polynomial-time machines with one-sided, two-sided and zero error probability (defining the classes RP (and coRP), BPP and ZPP). We also consider probabilistic machines that use logarithmic space (i.e., the class RL).
Lecture 8: Non-Uniform Polynomial Time (P/poly). We introduce the notion of non-uniform polynomial-time and the corresponding complexity class P/poly. In this (somewhat fictitious) computational model, Turing machines are provided an external advice string to aid them in their computation (on strings of a certain length). The non-uniformity is expressed in the fact that a different advice string may be defined for every input length. We show that P/poly ``upper bounds'' the notion of efficient computation (as BPP subseteq P/poly), yet this upper bound is not tight (as P/poly contains non-recursive languages). The effect of introducing uniformity is discussed, and shown to collapse P/poly to P. Finally, we relate the P/poly versus NP question to the question of whether NP-completeness via Cook-reductions is more powerful than NP-completeness via Karp-reductions. This is done by showing, on one hand, that NP is Cook-reducible to a sparse set iff NP subseteq P/poly, and on the other hand that NP is Karp-reducible to a sparse set iff NP = P.
Lecture 9: The Polynomial Hierarchy (PH). We define a hierarchy of complexity classes extending NP and contained in PSPACE. This is done in two ways, shown equivalent: the first by generalizing the notion of Cook reductions, and the second by generalizing the definition of NP. We then relate this hierarchy to complexity classes discussed in previous lectures, such as BPP and P/poly: we show that BPP is in PH, and that if NP subseteq P/poly then PH collapses to its second level.
Lecture 10: The counting class #P and Approximating it. The class NP captures the difficulty of determining whether a given input has a solution with respect to some (tractable) relation. A potentially harder question, captured by the class #P, refers to determining the number of such solutions. We first define the complexity class #P, and classify it with respect to other complexity classes. We then prove the existence of #P-complete problems, and mention some natural ones. Then we try to study the relation between #P and NP more exactly, by showing we can probabilistically approximate #P using an oracle in NP. Finally, we refine this result by restricting the oracle to a weak form of SAT (called uniqueSAT).
Lecture 11: Interactive Proof Systems. We introduce the notion of interactive proof systems and the complexity class IP, emphasizing the role of randomness and interaction in this model. The concept is demonstrated by giving an interactive proof system for Graph Non-Isomorphism. We discuss the power of the class IP, and prove that coNP subseteq IP. We discuss issues regarding the number of rounds in a proof system, and variants of the model such as public-coin systems (a.k.a. Arthur-Merlin games).
Lecture 12: Probabilistically Checkable Proof (PCP). We introduce the notion of Probabilistically Checkable Proof (PCP) systems. We discuss some complexity measures involved, and describe the class of languages captured by corresponding PCP systems. We then demonstrate the alternative view of NP emerging from the PCP Characterization Theorem, and use it in order to prove non-approximability results for the problems max3SAT and maxCLIQUE.
Lecture 13: Pseudorandom Generators. Pseudorandom generators are defined as efficient deterministic algorithms which stretch short random seeds into longer pseudorandom sequences. The latter are indistinguishable from truly random sequences by any efficient observer. We show that, for efficiently samplable distributions, computational indistinguishability is preserved under multiple samples. We relate pseudorandom generators and one-way functions, and show how to increase the stretching of pseudorandom generators. The notes are augmented by an essay of Oded.
Lecture 14: Pseudorandomness and Computational Difficulty. We continue our discussion of pseudorandomness and show a connection between pseudorandomness and computational difficulty. Specifically, we show how the difficulty of inverting one-way functions may be utilized to obtain a pseudorandom generator. Finally, we state and prove that a hard-to-predict bit (called a hard-core) may be extracted from any one-way function. The hard-core is fundamental in our construction of a generator.
Lecture 15: Derandomization of BPP (The NW-generator). We present an efficient deterministic simulation of randomized algorithms. This process, called derandomization, introduces new notions of pseudorandom generators. We extend the definition of pseudorandom generators and show how to construct a generator that can be used for derandomization. The new construction differs from the generator constructed in the previous lecture in its running time (it runs more slowly, but still fast enough for the simulation). The benefit is that it relies on a seemingly weaker assumption.
Lecture 16: Derandomizing Space-Bounded Computations. We consider derandomization of space-bounded computations. We show that BPL subseteq DSPACE(\log^2 n), namely, any bounded-probability Logspace algorithm can be deterministically emulated in O(\log^2 n) space. We further show that BPL subseteq SC, namely, any such algorithm can be deterministically emulated in O(\log^2 n) space and (simultaneously) in polynomial time.
Lecture 17: Zero-Knowledge Proof Systems. We introduce the notion of zero-knowledge interactive proof systems, and consider an example of such a system (Graph Isomorphism). We define perfect, statistical and computational zero-knowledge, and present a method for constructing zero-knowledge proofs for NP languages, which makes essential use of bit commitment schemes. We mention that zero-knowledge is preserved under sequential composition, but is not preserved under parallel repetition.
Lecture 18: NP in PCP[poly,O(1)]. The main result in this lecture is NP subseteq PCP(poly,O(1)). In the course of the proof we introduce an NPC language ``Quadratic Equations'', and show it to be in PCP(poly,O(1)). The argument proceeds in two stages: first assuming properties of the proof (oracle), and then testing these properties. An intermediate result of independent interest is an efficient probabilistic algorithm that distinguishes between linear and far-from-linear functions.
Lecture 19: Dtime(t) contained in Dspace(t/log t). We prove that Dtime(t(\cdot)) subseteq Dspace(t(\cdot)/\log t(\cdot)). That is, we show how to simulate any given deterministic multi-tape Turing Machine (TM) of time complexity $t$, using a deterministic TM of space complexity $t/\log t$. A main ingredient in the simulation is the analysis of a pebble game on directed bounded-degree graphs.
Lecture 20: Circuit Depth and Space Complexity. We study some of the relations between Boolean circuits and Turing machines. We define the complexity classes NC and AC, compare their computational power, and point out the possible connection between uniform-NC and ``efficient'' parallel computation. We conclude the discussion by establishing a strong connection between space complexity and depth of circuits with bounded fan-in.
Lecture 21: Communication Complexity. We consider Communication Complexity - the analysis of the amount of information that needs to be communicated between two parties that wish to reach a common computational goal. We start with some basic definitions, considering both deterministic and probabilistic models for the problem, and annotating our discussion with a few examples. Next we present a couple of tools for proving lower bounds on the complexity of communication problems. We conclude by proving a linear lower bound on the communication complexity of probabilistic protocols for computing the inner product of two vectors, where initially each party holds one vector.
Lecture 22: Circuit Depth and Communication Complexity. The main result presented in this lecture is a (tight) nontrivial lower bound on the monotone circuit depth of s-t-Connectivity. This is proved via a series of reductions, the first of which is of significant importance: A connection between circuit depth and communication complexity. We then get a communication game and proceed to reduce it to other such games, until reaching a game called FORK. We conclude that a lower bound on the communication complexity of FORK, to be given in the next lecture, will yield an analogous lower bound on the monotone circuit depth of s-t-Connectivity.
Lecture 23: Communication Complexity of the FORK Game. We analyze the FORK game, introduced in the previous lecture. We give tight lower and upper bounds on the communication needed in a protocol solving FORK. This completes the proof of the lower bound on the depth of monotone circuits computing the function st-Connectivity.
Lecture 24: Average-Case Complexity. We introduce a theory of average-case complexity which refers to computational problems coupled with probability distributions. We start by defining and discussing the classes of P-computable and P-samplable distributions. We then define the class DistNP (which consists of NP problems coupled with P-computable distributions), and discuss the notion of average polynomial-time (which is unfortunately more subtle than it may seem). Finally, we define and discuss reductions between distributional problems. We conclude by proving the existence of a complete problem for DistNP.
Lecture 25: Computational Learning Theory. We define a model of automatic learning called probably approximately correct (PAC) learning. We define efficient PAC learning, and present several efficient PAC learning algorithms. We prove the Occam's Razor Theorem, which reduces the PAC learning problem to the problem of finding a succinct representation for the values of a large number of given labeled examples.
Lecture 26: Relativization. In this lecture we deal with relativization of complexity classes. In particular, we discuss the role of relativization with respect to the P vs NP question; that is, we shall see that for some oracle A, $P^A = NP^A$, whereas for another $A$ (actually for almost all other $A$'s) $P^A \neq NP^A$. However, it also holds that $IP^A \neq PSPACE^A$ for a random $A$, whereas $IP = PSPACE$.

Discrete Math for Computer Science Students

Network & System Security


Research Papers, with a PPT for each

Team - Members - Title of paper (ppt)
Team 1 - Daniel J Mathew, Eldhose Peter - "All your iFRAMEs point to us" (ppt)
Team 2 - Neeraj, Sidhartha Gupta - "Abusing Social Networks for Automated User Profiling" (ppt)
Team 3 - Gaurav, Vineet - "Persuasive Cued Click-Points" (ppt)
Team 4 - Anuj Kalia, Shashank Gupta - "Detecting Forged TCP Reset Packets" (ppt)
Team 5 - Jagmeet Bali, Agastya Nanda - "Using Cloud Storage as Attack Vector and Online Slack Space" (ppt)
Team 6 - Siddharth Bora, Salik Jamal Warsi - "Visual Cryptography" (ppt)
Team 7 - Ravee, Rahul - "Clickjacking: Attacks and Defenses" (ppt)
Team 8 - Alankrit, Nikita Gupta - "Detecting Spammers on Social Networks" (ppt)
Team 9 - Faisal, Vigya - "Detecting Spammers on Social Networks" (ppt)
Team 10 - Abhishek Gangwal, Rohan Sharma - "Leaking Browser History" (ppt)
Team 11 - Jatin Kumar, Ankush Jindal - "Detecting Return-to-libc Buffer Overflow Attacks Using Network Intrusion Detection Systems" (ppt)
Team 12 - Parth, Abhinn - "A combinatorial approach to detecting buffer overflow vulnerabilities" (ppt)
Team 13 - Shehbaz Jaffer, Anshuj Garg - "How unique is your web browser" (ppt)
Team 14 - Ashutosh, Chandra Prakash - "SybilInfer: Detecting Sybil Nodes using Social Networks" (ppt)
Team 15 - Rahul, Aswin - "Design and Evaluation of a Real-Time URL Spam Filtering Service" (ppt)
Team 16 - Siddhant, Deepak - "Privacy-Preserving Social Plug-ins" (ppt)
Team 17 - Rajat, Parikshit - "Aurasium: Practical Policy Enforcement for Android Applications" (ppt)
Team 18 - Björn Holm, Pasi Saarinen - "Pwning Random Number Generators" (ppt)
Team 19 - Jwala N Rao, Deepika M - "Defense against Rubber hose attacks" (ppt)
Team 20 - Kritarth, Arunim samat - "Dissent: Accountable Anonymous Group Messaging" (ppt)
Team 21 - Manav, Jagjeet - "These Aren't the Droids You're Looking For" (ppt)
Team 22 - Abhinav, Hitesh - "Protecting Browsers from Cross-Origin CSS Attacks" (ppt)

Wednesday 10 July 2013

Highlight the Text in the Margin

There are some other tricks for active reading. One, of course, is to highlight important or interesting passages. There are several ways to do this. The worst is to use a yellow highlighting marker (or hot pink, or whatever color you like). The main problem with this is that you will tend to find almost every sentence to be important or interesting. As a consequence, every page will become yellow (or hot pink, or whatever). Not only does this defeat the purpose of highlighting—because if everything has been highlighted, then really nothing has been!—but the pages of your text will become damp, curl up, and be generally messy.

A slightly less messy, but equally useless, technique is to use a pen or pencil to underline important or interesting passages. I guarantee that you will wind up underlining every sentence on every page, and you will have gained nothing.
The technique that I suggest is also susceptible to this problem, but has a built-in way to overcome it, so that you can re-read the text, highlighting different passages each time. The trick is to highlight a passage by drawing a vertical line in the margin. I like to use the right margin and to make my line a right square bracket: ]. If you want to make it clear [exactly where the highlighted passage begins or ends,] you can use small square brackets in the text, as I did in this sentence, along with the vertical line in the margin. This way, even if you've slipped into the error of highlighting (i.e., vertical-lining) every sentence on every page, at least you haven't ruined the page. Moreover, when you re-read the text (note that I said 'when', not 'if' :-), you can then use a different highlighting technique (e.g., underlining) to highlight more important passages. Sometimes, I use double brackets in the margin for this second round of highlighting: ]] and underlining for a third round. (If you must, you could use yellow highlighter for a fourth round.)

Tuesday 9 July 2013

Read Slowly


The first step in reading actively is to read s-l-o-w-l-y. Here is an algorithm (i.e., a procedure) for how to read any text, in any subject, slowly and actively:
    WHILE there is a next sentence to read, DO:
      BEGIN { while }
        Read it, SLOWLY;
        IF you do not understand it, THEN
          BEGIN { if }
            re-read the previous material, SLOWLY;
            re-read the incomprehensible sentence, SLOWLY;
            IF you still don't understand it, THEN
               ask a fellow student to explain it;
            IF you still don't understand it, THEN
               ask your Teaching Assistant (TA) to explain it;
            IF you still don't understand it, THEN
               ask me;
            IF you are in an upper-level course & you still don't understand it, THEN
               write a paper about it (!)
          END { if } 
    END; { while }
    
When the loop ends, there is no next sentence (the Boolean test in the WHILE is false), so you've understood the text!
For those of you who may not be familiar with how to read structured computer programs such as this one, here's how it goes: In a "while" statement, if the initial test is false, then the rest of the statement is not executed. So, if you are at the beginning or the middle of reading a text, there will be a "next" sentence, so you do execute the rest of the statement, which says to read that next sentence slowly, etc. However, if you have finished reading the entire text (and, hopefully, have now understood it), then there is no next sentence, so you are finished! (The words in braces, like "{ while }", are just computer-programming notation for a comment that is intended for human readers of a computer program but that is ignored by the computer.)
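For readers who have never watched such a program run, here is the same control flow as a tiny runnable Python sketch (my own illustration, not part of the original advice); it simply shows that once the WHILE test becomes false, the body is skipped and execution falls through to whatever comes after the loop:

    sentences = ["First sentence.", "Second sentence.", "Third sentence."]  # example text
    i = 0
    while i < len(sentences):                      # the WHILE test
        print("Reading slowly:", sentences[i])     # the body runs only while the test is true
        i = i + 1
    # once i equals len(sentences) the test is false, the body is skipped,
    # and we arrive here: there is no next sentence, so the text has been read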
This algorithm has several major advantages:
  1. It forces you to actively think about each sentence you read before you go on to read the next one.
  2. It slows you down, so that you don't read past the point at which you don't understand. This is especially important in mathematical and scientific subjects.
  3. It can help you get help from your teacher, because you can show your teacher exactly where you got lost. It is always much better to show your teacher exactly what it is that you don't understand than it is to just say that you don't understand the material.
  4. Note that it also provides you an opportunity to interact with your instructors and fellow students!
How do you know whether you understand what you've read? Easy: After each sentence, ask yourself "Why?" (Pressley & El-Dinary 1992).
For more information on slow reading, see:
  1. Pressley, Michael, & El-Dinary, Pamela Beard (1992), "Memory Strategy Instruction that promotes Good Information Processing", in Douglas J. Herrmann, Herbert Weingartner, Alan Searleman, & Cathy McEvoy (eds.), Memory Improvement: Implications for Memory Theory (New York: Springer-Verlag): 79-100.
  2. Fletcher, Lancelot R. (1994), "Slow Reading Lists (and the Meaning of Slow Reading)"
    • Note: If you scroll down about halfway on the above link, you'll reach the section called "What Do I Mean by "Slow Reading"?".
  3. Hartman, Geoffrey H. (1996), "The Fate of Reading Once More", PMLA (Proceedings of the Modern Language Association) 111(3) (May): 383-389; see especially p. 386.
  4. Daly, Robert (2003), "Slow Reading: Why it Matters, How to Do It, How to Teach It"
  5. Waters, Lindsay (2007), "Time for Reading", Chronicle of Higher Education 53(23) (9 February): B6-B8.
  6. Bauerlein, Mark (2008), "Online Literacy Is a Lesser Kind: Slow Reading Counterbalances Web Skimming", Chronicle of Higher Education 54(31) (19 September): B10-B11.
  7. Blessing, Kimberly A. (2013), "I Re-Read, therefore I Understand", Philosophy Now No. 94 (January/February): 17.
    • "René Descartes' advice on reading philosophy"
      1. "Read through the entire work quickly, as you would a novel.…"
      2. "Read through a second time, paying greater attention…"
      3. "Read through a third time, keeping the questions and problems noted in Step 2 in mind.…"
      4. "If some difficulties still remain, re-read those parts a fourth time.…"
  8. And there is also research on why speed reading doesn't work.

Study Hard Subjects First & Study in a Quiet Place


Study hard subjects first. Each night (or day) when studying or doing your homework, do those subjects first for which you need to be alert and energetic. Leave the easier, or more fun, subjects to later.
Study in a quiet place, with as few distractions as possible. Do not listen to music or TV: it is virtually impossible to do two things at once if one of them is studying, and the research on "multitasking" bears this out.




When should you study or do your homework? It's tempting to put off your homework to the last minute. There are at least two good reasons to do your homework as soon as possible, rather than putting it off till the evening when it's no longer daylight (although you should certainly take a break between the end of the school day and starting your homework):
  1. It's better to get it done and over with, and to leave yourself enough time to do it all. If you put it off, you may find that you have an assignment or two that are going to take you a lot longer than you thought they would. If you start early and get your work done before you relax, you'll have enough time for even those hard assignments (even if it means not having enough time to Facebook or play videogames or read for fun). The general principle is: Don't eat your dessert first!
  2. You're more awake during the daytime or after relaxing for, say, an hour or so after classes end, than you will be at the end of the day just before going to sleep.

Humor

Read it completely, and you can surely draw the moral from it.





Humor 1

Logic GATE Tutor

For an excellent logic gate tutor, please visit:

http://users.cs.cf.ac.uk/C.L.Mumford/logic/LogicGateTutor.html

Data Structure Algorithm Examples

Enable Java in your browser and try out these demos. They are a good way of getting to grips with some of the algorithms involved in the data structures:

The Travelling Salesman Problem: An Introduction

What is the Travelling Salesman Problem?

The idea behind the Travelling Salesman Problem (TSP) is as follows: a salesman must visit a specified number of cities. Starting from any one of these cities, he must make a tour, visiting each of the other cities exactly once, with his final destination being his city of departure. This should be done in such a way as to minimise the total distance travelled on the tour.

Everyday Usage

Of course the problem's most obvious application in everyday life is that of our travelling salesman, who seeks the best route around a number of places where he has appointments. By finding an efficient route, he saves time and lowers the cost of his journey. Other everyday applications arise in electrical work, where an electrician might need to consider the best way of wiring a large house, or where producers of circuit boards need to work out the most efficient routing of connections on the board.

Symmetrical and Non-symmetrical tours

Tours can be symmetrical or non-symmetrical. A symmetrical tour assumes that the distance from city 'a' to city 'b' is the same as that from 'b' to 'a'. A non-symmetrical tour allows these distances to differ (think of one-way systems, which mean that going from town 'a' to town 'b' is not the exact reverse of going from 'b' to 'a'). For this project, we will consider the tours to be symmetrical: going from Liverpool to Manchester to Glasgow to Liverpool is considered the same as going from Liverpool to Glasgow to Manchester to Liverpool, since the shape of the two tours is the same.

Exact Methods

The formula for the number of distinct tours of n cities is (n-1)!/2. A four-city tour has three possible routes ((3 x 2 x 1)/2 = 3), whilst a five-city tour has 12 possible distinct tours ((4 x 3 x 2 x 1)/2 = 12). A problem whose possibilities increase at such a rate rapidly becomes complex. As it would require 60 distinct drawings to represent a six-city problem, we understandably turn to the superior calculating speed of the computer to help with the problem. The optimum tour can be found by calculating the total length of each possible route around the cities and choosing the shortest. But even a computer begins to struggle, at a relatively low number of cities, to consider all possible solutions: finding the optimal route for a thirty-city tour would require calculating the distance of 4.42 x 10^30 different possible tours.
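To make the exhaustive approach concrete, here is a small Python sketch (my own illustration; the city names and coordinates are invented) that fixes a starting city, enumerates every ordering of the remaining cities, and keeps the shortest closed tour. For n cities this checks (n-1)! routes, each symmetric tour appearing once in each direction:

    import itertools, math

    # Hypothetical example data: city -> (x, y) coordinates.
    cities = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (5, 5)}

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def tour_length(order):
        """Total length of the closed tour visiting the cities in `order`."""
        return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Fix the starting city and permute the rest.
    start = "A"
    rest = [c for c in cities if c != start]
    best = min(((start,) + p for p in itertools.permutations(rest)), key=tour_length)
    print(best, round(tour_length(best), 2))

Replacing the exhaustive permutation generator with one of the heuristics below trades the guarantee of optimality for a running time that grows far more slowly.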

Tour Construction Heuristics

The difficulties associated with finding an optimal tour force us to find alternatives. A heuristic is a 'rule-of-thumb', applying a general idea to a specific problem. Although this is unlikely to provide us with the optimal solution, it tends to provide a near-optimal solution and is constructed far more quickly. The heuristics dealt with in this project are: 

Nearest Neighbour - Keep finding the nearest unvisited city to the present city, until all the cities have been visited. 
 The Nearest Neighbour Algorithm works as follows:

1. Choose any city as a starting point. Call this city 'a'.
2. Visit the nearest city to city 'a', which we shall call city 'b'. City 'b' becomes the 'current city'.
3. Visit the nearest city to city 'b' which has not yet been visited - city 'c'. City 'c' is now the 'current city'.
4. As per point 3, repeatedly visit the nearest unvisited city to the current city until all cities have been visited once.
5. Once all cities have been visited once, return from the last city to have been visited to the starting city - city 'a'.

[Illustration: the starting city (green, also the final destination) is linked to its closest city (red); a line is then repeatedly drawn from the newly added city to its nearest unvisited city (Steps 3/4) until only one city remains unvisited; once it is incorporated, the only option is to return to the starting city.]
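A minimal Python sketch of the Nearest Neighbour heuristic (my own illustration, reusing the hypothetical cities dictionary and dist() helper from the exact-method sketch above):

    def nearest_neighbour(cities, start):
        """Greedy tour: repeatedly move to the nearest unvisited city, then return home."""
        unvisited = set(cities) - {start}
        tour = [start]
        current = start
        while unvisited:
            nxt = min(unvisited, key=lambda c: dist(cities[current], cities[c]))
            tour.append(nxt)                 # steps 2-4: visit the nearest unvisited city
            unvisited.remove(nxt)
            current = nxt
        tour.append(start)                   # step 5: close the tour at the starting city
        return tour

    print(nearest_neighbour(cities, "A"))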





Multi-Fragment - Keep adjoining the two closest cities that are not already connected to two other cities and whose joining will not result in a closed mini-tour. Repeat until every city is directly connected to two other cities. 
 The Multi-Fragment Algorithm works as follows:

1. Consider a distance matrix (such as the type at the back of a road atlas of Britain), containing all the distances between all cities on the tour.
2. Find the shortest distance and link these two cities together.
3. Then, find the second shortest distance, and link these two cities.
4. Now, find the next smallest edge whose cities conform to the following criteria:
a) Both must have an edge degree <= 1 (i.e., neither can already be linked to more than one other city).
b) Adding an edge between the two cities must not create a closed tour, unless that tour includes all the cities.
5. Repeat until all cities end up being connected to two other cities.

[Illustration: the two closest cities (red) are joined first, each then having one edge (grey); cities with two edges (black) can take no further edges; the next closest eligible pair is repeatedly joined, skipping pairs whose edge would close a premature tour, until only two cities with edge degree less than two remain, and joining them completes the tour.]
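A Python sketch of the Multi-Fragment (greedy-edge) idea, again reusing the hypothetical cities/dist() from the sketches above. A small union-find structure over tour fragments enforces criteria (a) and (b):

    import itertools

    def multi_fragment(cities):
        """Greedy-edge heuristic: returns the list of edges forming the tour."""
        names = list(cities)
        parent = {c: c for c in names}           # union-find over tour fragments
        degree = {c: 0 for c in names}

        def find(c):
            while parent[c] != c:
                parent[c] = parent[parent[c]]
                c = parent[c]
            return c

        edges = sorted(itertools.combinations(names, 2),
                       key=lambda e: dist(cities[e[0]], cities[e[1]]))
        chosen = []
        for u, v in edges:                       # shortest distances first
            if degree[u] < 2 and degree[v] < 2 and find(u) != find(v):
                chosen.append((u, v))            # no city gets a third edge, no early cycle
                degree[u] += 1
                degree[v] += 1
                parent[find(u)] = find(v)
                if len(chosen) == len(names) - 1:
                    break
        ends = [c for c in names if degree[c] < 2]
        chosen.append((ends[0], ends[1]))        # the last edge closes the tour
        return chosen

    print(multi_fragment(cities))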

Farthest Insertion - After joining the two farthest cities to make a mini-tour, keep inserting the city farthest from the tour, forming a new mini-tour each time. Repeat until all cities have been visited. There is also a Nearest Insertion algorithm, which works in the opposite fashion: it joins the two closest cities and repeatedly inserts the city nearest to the tour. 
 The Farthest Insertion Algorithm works as follows:

1. Find the two cities farthest apart. Let's call them city 'a' and city 'b'. Join them together.
2. City 'c' is the farthest city from the edge between city 'a' and city 'b'. This is incorporated into the tour by creating an edge between city 'c' and each of cities 'a' and 'b', thus creating a triangular mini-tour.
3. The remaining cities are incorporated as follows:
a) Find the farthest city yet to be incorporated, from any point on the mini-tour (city 'd').
b) Note the edge to which this city is closest.
c) Erase this edge, and create two new edges, between city 'd' and each of the cities at the ends of the erased edge.

[Illustration: the two farthest cities are joined to form a mini-tour; the city farthest from the mini-tour is incorporated by connecting it to both cities, giving a three-city mini-tour; thereafter, each farthest remaining city is inserted by erasing the tour edge closest to it (green) and adding edges from the city to that edge's two endpoints; this is repeated until the last remaining city is inserted, completing the tour.]
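Finally, a Python sketch of Farthest Insertion under the same assumptions (the hypothetical cities/dist() from the sketches above). One interpretive choice: the 'closest edge' of step 3b is taken here to be the edge whose replacement increases the tour length least, which is the usual cheapest-insertion rule; step 2 is simply the first pass through the insertion loop:

    import itertools

    def farthest_insertion(cities):
        """Build a tour by repeatedly inserting the city farthest from the current mini-tour."""
        names = list(cities)

        def d(a, b):
            return dist(cities[a], cities[b])

        # Step 1: start with the two cities farthest apart.
        a, b = max(itertools.combinations(names, 2), key=lambda e: d(*e))
        tour = [a, b]                                # read as a closed mini-tour
        remaining = [c for c in names if c not in tour]

        while remaining:
            # Step 3a: the city farthest from any city already on the mini-tour.
            city = max(remaining, key=lambda c: min(d(c, t) for t in tour))
            # Steps 3b/3c: replace the tour edge whose removal-and-reconnection is cheapest.
            n = len(tour)
            i = min(range(n),
                    key=lambda i: d(tour[i], city) + d(city, tour[(i + 1) % n])
                                  - d(tour[i], tour[(i + 1) % n]))
            tour.insert(i + 1, city)
            remaining.remove(city)
        return tour + [tour[0]]                      # close the tour

    print(farthest_insertion(cities))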

The QuickSort Algorithm

QuickSort is a particularly effective and popular sorting algorithm. It is a divide-and-conquer algorithm, working by splitting the data into two parts and then sorting them recursively. Because it is recursive, it is often hard to visualise what the algorithm is doing.

Here is the pseudocode algorithm for QuickSort:


 
  quicksort(a[], p, r)
      if r > p then
          j = partition(a[], p, r)
          quicksort(a[], p, j-1)
          quicksort(a[], j+1, r)

  partition(a[], p, r)
      i = p
      j = r + 1
      pivot = a[p]
      do {
          do i = i+1 while (a[i] < pivot)
          do j = j-1 while (a[j] > pivot)
          if (i < j) exchange(a[i], a[j])
      } while (i < j)
      exchange(a[p], a[j])
      return j

The Partition method receives a list or sublist and places the first element (the pivot) in its correct position within it, ensuring that all elements to the left of that position are no larger and all elements to the right are no smaller. Note that the inner i-scan relies on there being an element at position r+1 that is at least as large as the pivot (a sentinel), so that it cannot run off the end of the array.

The best and average cases of QuickSort take O(n log n) time; the worst case is O(n^2).
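For readers who want to run the algorithm, here is a minimal Python translation of the same scheme (my own sketch, not part of the original page); an explicit bounds check on the i-scan replaces the sentinel element that the pseudocode implicitly assumes at position r+1:

    def quicksort(a, p=0, r=None):
        """In-place QuickSort of a[p..r], mirroring the pseudocode above."""
        if r is None:
            r = len(a) - 1
        if r > p:
            j = partition(a, p, r)
            quicksort(a, p, j - 1)
            quicksort(a, j + 1, r)

    def partition(a, p, r):
        """Place the pivot a[p] at its final position j and return j."""
        pivot = a[p]
        i, j = p, r + 1
        while True:
            i += 1
            while i <= r and a[i] < pivot:   # bounds check instead of a sentinel
                i += 1
            j -= 1
            while a[j] > pivot:              # a[p] == pivot stops this scan
                j -= 1
            if i >= j:
                break
            a[i], a[j] = a[j], a[i]
        a[p], a[j] = a[j], a[p]
        return j

    data = [9, 4, 7, 1, 3, 8, 2]
    quicksort(data)
    print(data)    # [1, 2, 3, 4, 7, 8, 9]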

Go to QuickSort demonstration

Artificial Intelligence Notes by David Marshall