Main topic of week 12: pumping lemma, context-free grammar
Since not all languages are regular (for example, the set of strings with the same number of 1s and 0s), the idea of the pumping lemma is to give us a tool to prove that a language is not regular. The main scheme of the pumping lemma is this: if you have a DFSA with p states that accepts L, then every accepted string of length at least p can be written as uvw, where v is not empty (and |uv| <= p). The idea behind this is that somewhere in the first p symbols the machine must loop back to a state it has already visited, since the string is longer than the total number of states. So we can repeat that cycle k times, and the string uv^k w will still be accepted by L.
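The loop-back idea above can be sketched in code. This is a minimal illustration with a hypothetical 3-state DFSA (accepting strings whose length is divisible by 3 — the machine and its names are my own example, not from the lecture): the split point is found by running the machine and stopping at the first repeated state, and then u v^k w is checked for several k.

```python
# Hypothetical 3-state DFSA over {0,1} accepting strings whose length
# is divisible by 3 (transitions ignore the symbol, only count length).
delta = {(0, '0'): 1, (0, '1'): 1,
         (1, '0'): 2, (1, '1'): 2,
         (2, '0'): 0, (2, '1'): 0}
start, final = 0, {0}
p = 3  # number of states

def accepts(s):
    q = start
    for c in s:
        q = delta[(q, c)]
    return q in final

def pump_split(s):
    """Split s (with |s| >= p) into u, v, w at the first repeated state."""
    q, seen = start, {start: 0}
    for i, c in enumerate(s[:p]):
        q = delta[(q, c)]
        if q in seen:                 # a state repeats within the first p steps
            j = seen[q]
            return s[:j], s[j:i + 1], s[i + 1:]
        seen[q] = i + 1
    raise AssertionError("pigeonhole guarantees a repeated state")

s = "010101"                # accepted, and its length is >= p
u, v, w = pump_split(s)
for k in range(4):          # u v^k w stays in the language for every k
    print(k, accepts(u + v * k + w))
```

Running this prints `True` for every k, since pumping the loop only changes the length by a multiple of 3.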
Tuesday, November 25, 2008
Week 11
Main topic of week 11: NFSA, equivalence of DFSA and NFSA, Cartesian product
The definition of an NFSA is very close to that of a DFSA; the only differences I see are that in an NFSA empty (epsilon) transitions are allowed, and the transition function now sends a state to a set of states. In summary, a DFSA is just a special case of an NFSA in which empty transitions and multiple-state transitions are not allowed.
DFSA and NFSA are equivalent, and every regular expression can be expressed using a DFSA or an NFSA. To change an NFSA back into a DFSA, we need some additional symbols: hatQ (all subsets of Q), hatSigma (same as Sigma), hatDelta (a transition function that sends a set of states to all the states reachable from it by a single transition), hatS (the set of starting states), and hatF (the subsets that contain at least one final state).
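The subset construction described above can be sketched like this. The small NFSA (accepting strings ending in "01") and its state names are my own illustration, and epsilon transitions are omitted for brevity; only the subsets reachable from hatS are built, which is all hatDelta ever needs.

```python
from itertools import chain

# A small hypothetical NFSA over {0,1} accepting strings that end in "01".
nfa_delta = {('q0', '0'): {'q0', 'q1'},
             ('q0', '1'): {'q0'},
             ('q1', '1'): {'q2'}}
nfa_start, nfa_final = {'q0'}, {'q2'}

def subset_construction(delta, start, final, alphabet):
    """Build hatDelta on the reachable subsets of Q (elements of hatQ)."""
    hat_start = frozenset(start)
    hat_delta, todo, seen = {}, [hat_start], {hat_start}
    while todo:
        S = todo.pop()
        for a in alphabet:
            # hatDelta(S, a) = union of delta(q, a) over every q in S
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            hat_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    hat_final = {S for S in seen if S & final}  # subsets holding a final state
    return hat_delta, hat_start, hat_final

hat_delta, hat_start, hat_final = subset_construction(
    nfa_delta, nfa_start, nfa_final, '01')

def dfa_accepts(s):
    S = hat_start
    for c in s:
        S = hat_delta[(S, c)]
    return S in hat_final
```

The resulting machine is deterministic: each reachable subset has exactly one transition per symbol.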
The Cartesian product is used when we need to design a machine that has more than one independent requirement. For example, for strings with odd length and an odd number of 1s, we could use two DFSAs, one to keep track of the length of the string and one to keep track of the number of 1s. Or we could take the Cartesian product to produce a machine with 4 states that keeps track of the length and the 1s at the same time.
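A sketch of that 4-state product machine: each component DFSA is just a parity bit, and the product state is the pair of bits (the encoding with 0 for even and 1 for odd is my own choice for illustration).

```python
# Component 1: parity of the length (flips on every symbol).
def step_len(q, c):
    return 1 - q

# Component 2: parity of the number of 1s (flips only on '1').
def step_ones(q, c):
    return 1 - q if c == '1' else q

def product_accepts(s):
    q_len, q_ones = 0, 0                 # start state: (even, even)
    for c in s:
        # The product machine steps both components on the same symbol.
        q_len, q_ones = step_len(q_len, c), step_ones(q_ones, c)
    return (q_len, q_ones) == (1, 1)     # odd length AND odd number of 1s

print(product_accepts("100"))   # length 3 (odd), one 1 (odd) -> True
print(product_accepts("011"))   # length 3 (odd), two 1s (even) -> False
```

The four reachable pairs (0,0), (0,1), (1,0), (1,1) are exactly the 4 states mentioned above.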
At last we learnt how to convert an FSA back into a regular expression, by eliminating one state at a time.
Tuesday, November 18, 2008
Week 10
Main topic of week 10: equivalence of regular expressions and DFSA
An additional problem is proving that two regular expressions are equivalent, which works the same way as proving the equivalence of a regular expression and a language.
A Deterministic Finite State Automaton is a powerful tool for examining whether a specific string satisfies a given formula. The general attributes of a DFSA are Q (states), Sigma (alphabet), delta (transition function), s (start state), and F (final states). The extended transition function delta* takes a whole string to a state, rather than a single character. A DFSA may reach a state from which no accepting state can be reached. Such a state is called a dead end, and it can be left out when drawing a DFSA diagram.
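The five attributes and delta* fit in a few lines of code. This is a minimal sketch with a machine of my own choosing (binary strings with an even number of 1s), not one from the lectures:

```python
# Q, Sigma, delta, s, F for "binary strings with an even number of 1s".
Q = {'even', 'odd'}
sigma = {'0', '1'}
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd',  ('odd', '1'): 'even'}
s, F = 'even', {'even'}

def delta_star(q, w):
    """delta*: takes a whole string to a state, one character at a time."""
    for c in w:
        q = delta[(q, c)]
    return q

def accepts(w):
    return delta_star(s, w) in F
```

Note that delta* on the empty string is just the state itself, which is why this machine accepts the empty string.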
Tuesday, November 11, 2008
Week 9
Main topic of week 9: language and regex
Brand new material starts here, and of course there are a lot of definitions to remember this week. These include strings, alphabets, length, and languages. A language is a set of strings over some alphabet. It has operations such as intersection, union, Kleene star, and complement.
Regular expressions are used to denote languages with a special structure. They are also defined recursively: the base cases are a single character, the empty string, and the empty set, and if R and S are defined regular expressions, then so are R+S, RS, and R*. The languages that can be denoted by regular expressions are defined in the same way. The base cases are the languages denoted by the empty-string, single-character, and empty-set expressions, and if two languages are each denoted by some regular expression, then their union, concatenation, and Kleene star can also be denoted by regular expressions.
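One way to see the recursive definition in action is to enumerate, case by case, the strings a regular expression denotes up to some length. The nested-tuple encoding of the expression here is my own, made up for illustration:

```python
def lang(r, n):
    """Strings of length <= n in the language denoted by expression r."""
    op = r[0]
    if op == 'empty':                      # empty set: denotes no strings
        return set()
    if op == 'eps':                        # empty string
        return {''}
    if op == 'sym':                        # single character
        return {r[1]}
    if op == 'union':                      # R + S
        return lang(r[1], n) | lang(r[2], n)
    if op == 'concat':                     # RS: all pairings within the bound
        return {u + v for u in lang(r[1], n) for v in lang(r[2], n)
                if len(u + v) <= n}
    if op == 'star':                       # R*: eps, plus R appended repeatedly
        out, frontier = {''}, {''}
        base = lang(r[1], n) - {''}
        while frontier:
            frontier = {u + v for u in frontier for v in base
                        if len(u + v) <= n} - out
            out |= frontier
        return out

# (0+1)*1 : binary strings ending in 1
r = ('concat', ('star', ('union', ('sym', '0'), ('sym', '1'))), ('sym', '1'))
print(sorted(lang(r, 2)))   # -> ['01', '1', '11']
```

Each branch of `lang` mirrors one clause of the recursive definition above, with the length bound keeping the Kleene-star case finite.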
More examples of regular expressions are provided here; with a little algebraic manipulation we can easily find that some regular expressions are the same.
To prove that the language denoted by a regular expression is equivalent to another language A, we prove mutual inclusion: the language of the expression is a subset of A, and A is a subset of the language of the expression.
Tuesday, November 4, 2008
Week 8
Main topic of week 8: naming repetition, and proving iteration correct
As discussed previously, recursion is an extension of the process of naming repetition. A recursive function is formed by providing a base case and a generating body. Iteration works in a similar way, but with a little variation. Unlike a recursive function, an iteration must have an end point somewhere, or it produces an infinite loop and yields a false result. Thus when proving the correctness of an iterative function, we first need to find a loop invariant that moves toward the final result, and a termination argument that guarantees the function will end. Termination is sometimes not very easy to find, and definitely requires some thinking.
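The invariant-plus-termination scheme can be annotated directly on a loop. The function below is my own toy example (sum of the first n naturals), not one from the lectures; the invariant is even checked at runtime with an assert:

```python
def sum_to(n):
    """Precondition: n is a natural number.
    Postcondition: returns 0 + 1 + ... + n."""
    total, i = 0, 0
    while i < n:
        # Loop invariant: total == 0 + 1 + ... + i, with 0 <= i <= n.
        i += 1
        total += i
        assert total == i * (i + 1) // 2   # the invariant, checked each pass
    # Termination: n - i is a natural number that strictly decreases each
    # pass, so the loop must end; on exit i == n, and the invariant then
    # gives exactly the postcondition.
    return total
```

The invariant carries the partial result toward the goal, and the decreasing quantity n - i is the termination argument.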
Monday, October 27, 2008
Week 7
Main topic of week 7: definition of function correctness
A function is correct if, whenever the precondition holds, the postcondition follows after executing the function. So, in order to prove a function correct, we generally assume the precondition is true, and use induction to examine what happens at each step. We then generalize all the results and compare them with the postcondition. When proving this, we usually divide the proof into different cases, and from them we derive the postcondition.
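Here is a small function, of my own choosing, written so that the case split in its proof matches the case split in its code (fast exponentiation; the induction hypothesis applies to the recursive call on the smaller input n // 2):

```python
def power(b, n):
    """Precondition: n is a natural number.
    Postcondition: returns b ** n."""
    if n == 0:                 # base case: b ** 0 == 1, postcondition holds
        return 1
    half = power(b, n // 2)    # induction hypothesis: half == b ** (n // 2)
    if n % 2 == 0:             # case n even: b**n == (b ** (n // 2)) ** 2
        return half * half
    return b * half * half     # case n odd: b**n == b * (b ** (n // 2)) ** 2

assert power(3, 5) == 243
```

Assuming the precondition, each branch's return value matches the postcondition in its case, which is exactly the shape of the induction proof.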
Monday, October 20, 2008
Week 6
Main topic of week 6: complexity analysis, as introduced in week 4.
A function's complexity is the sum of the time complexity of each child function call. Obviously, "child function" makes sense when the function calls itself somewhere, meaning the function is recursive. The function's time complexity may then depend on quantities like the length of a list or string. If the function calls itself with a shorter list or string, then it makes some progress towards the base case, and we can define a recursive time-complexity function T(n) based on the length. One thing worth mentioning here is that complexity analysis is based on the worst-case scenario, which simplifies the calculation quite a bit.
There is also an idea of how to handle these "discontinuous" recurrences, because the length of the list in a recursive function may not strictly decrease by one in each child call. In mergeSort(), the length of the list considered in a child call is half the length of the list in its parent call, which makes our time-complexity function easy to evaluate only at a series of discrete points n. So in order to prove a bound, we can first prove it for lists of length n = 2^k (in the mergeSort() example), since then k decreases by one in each subcall. Then we prove the function is monotonic, meaning either non-increasing or non-decreasing. Finally we handle all the possible lengths lying between 2^k and 2^(k+1).
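The powers-of-two step can be checked numerically. Taking T(1) = 1 and T(n) = 2 T(n/2) + n as a stand-in for the mergeSort() recurrence (the constants are my own choice), the closed form at n = 2^k is T(n) = n log2(n) + n:

```python
from math import log2

def T(n):
    """The recurrence, evaluated at powers of two."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# Compare the recurrence against the closed form n*log2(n) + n.
for k in range(1, 6):
    n = 2 ** k
    print(n, T(n), int(n * log2(n) + n))
```

The two columns agree at every power of two, which is the first step of the proof; monotonicity then squeezes the lengths in between.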
After finding the time-complexity function, we can bound it by some other functions. Thus big-O and big-Omega are introduced here again.