Computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer.
A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage.
Other measures of complexity are also used, such as the amount of communication used in communication complexity, the number of gates in a circuit used in circuit complexity, and the number of processors used in parallel computing. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity.
Computational complexity theory
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem.
More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance.
The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem.
For example, consider the problem of primality testing. The instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: is there a route of at most a given number of kilometres passing through all of Germany's 15 largest cities? The answer to this particular instance is of little use for solving other instances of the problem. For this reason, complexity theory addresses computational problems and not particular problem instances.

Tamal K. Dey.
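As a small illustration (a Python sketch, not part of the original text): the primality problem is the infinite mapping from numbers to yes/no answers, while each function call below decides only a single instance of it.

```python
def is_prime(n):
    """Decide one instance of the primality problem: is n prime?"""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # trial division up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True
```

The *problem* is the whole mapping; `is_prime(15)` decides the single instance "is 15 prime?".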
In recent years, the field has undergone particular growth in the area of data analysis. The application of topological techniques to traditional data analysis, which previously developed mostly in a statistical setting, has opened up new opportunities.
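As a tiny, illustrative sketch of such techniques (Python; the point set and the scale parameter `eps` are invented for the example): the 0-th Betti number of a point cloud, i.e. the number of connected components at a fixed scale, can be computed with union-find over the edges of a Vietoris-Rips complex.

```python
from math import dist

def betti0(points, eps):
    """0-th Betti number: connected components of the graph joining
    any two points at distance <= eps (Vietoris-Rips edges)."""
    parent = list(range(len(points)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(len(points))})

# Invented toy data: two well-separated pairs of points.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
```

At a small scale the two clusters are separate components; at a large scale everything merges into one.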
This course is intended to cover this aspect of computational topology along with the development of generic techniques for various topology-centered problems.

Objectives:
- Be familiar with basics in topology that are useful for computing with data
- Master a subset of algorithms for computing Betti numbers, topological persistence, homology cycles, Reeb graphs, and Laplace spectra from data
- Be familiar with how to design algorithms for problems in applications dealing with data
- Be familiar with how to research the background of a topic in data analysis and machine learning

Prerequisites: CSE; or grad standing and permission of instructor.

Materials: transparencies, text, and class material.
1. Computational Topology, Herbert Edelsbrunner and John L. Harer, AMS.
2. Curve and Surface Reconstruction: Algorithms with Mathematical Analysis, Tamal K. Dey, Cambridge U. Press.
3. Elements of Algebraic Topology, James R. Munkres, Addison-Wesley.
4. Press.
5. Class materials and notes posted on this web-site.

Topics:
1. Basics of Topology. Maps: homeomorphisms, homotopy equivalence, isotopy [Notes]. Manifolds [Notes].
   a. Simplicial complexes [Munkres] [Notes]
   b. Čech complexes, Vietoris-Rips complexes [Notes]
   c. Witness complexes [deSilva-Carlsson04 paper] [Notes]
   d.

Computational Mathematics involves mathematical research in areas of science and engineering where computing plays a central and essential role.
Topics include for example developing accurate and efficient numerical methods for solving physical or biological models, analysis of numerical approximations to differential and integral equations, developing computational tools to better understand data and structure, etc.
Computational mathematics is a field closely connected with a variety of other mathematical branches, as often a better mathematical understanding of the problem leads to innovative numerical techniques. Duke's Mathematics Department has a large group of mathematicians whose research involves scientific computing, numerical analysis, machine learning, computational topology, and algorithmic algebraic geometry.
The computational mathematics research of our faculty has applications in data analysis and signal processing, fluid and solid mechanics, electronic structure theory, biological networks, and many other topics.

Computational Mathematics faculty:
- Pankaj K.
- William K., Professor Emeritus of Mathematics
- Thomas Beale
- Paul L. Bendich, Associate Research Professor of Mathematics
- Robert Bryant, Phillip Griffiths Professor of Mathematics
- Robert Calderbank, Charles S. Sydnor Distinguished Professor of Computer Science
- Xiuyuan Cheng, Assistant Professor of Mathematics
- Ingrid Daubechies, James B.
- John Harer, Professor of Mathematics
- Gregory Joseph Herschlag, Assistant Research Professor of Mathematics
Numerical-Computational Model for Nonlinear Analysis of Frames with Semirigid Connection
- Jian-Guo Liu, Professor of Physics
- Jianfeng Lu
- Jonathan Christopher Mattingly

In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved, and to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?"
In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory. The theory of computation can be considered the creation of models of all kinds in the field of computer science.
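A Turing machine can itself be simulated directly. A minimal sketch (Python; the example machine, which accepts binary strings containing an even number of 1s, is invented for illustration):

```python
def run_tm(delta, tape, state="q0", accept="acc", reject="rej", blank="_"):
    """Simulate a one-tape Turing machine.

    delta maps (state, symbol) -> (next_state, written_symbol, head_move),
    with head_move in {-1, +1}. Halts on entering accept or reject.
    """
    cells = dict(enumerate(tape))        # tape as a sparse dict of cells
    head = 0
    while state not in (accept, reject):
        symbol = cells.get(head, blank)
        state, cells[head], move = delta[(state, symbol)]
        head += move
    return state == accept

# Invented example machine: accept binary strings with an even number of 1s.
delta = {
    ("q0", "0"): ("q0", "0", +1),  # q0: even number of 1s seen so far
    ("q0", "1"): ("q1", "1", +1),
    ("q0", "_"): ("acc", "_", +1),
    ("q1", "0"): ("q1", "0", +1),  # q1: odd number of 1s seen so far
    ("q1", "1"): ("q0", "1", +1),
    ("q1", "_"): ("rej", "_", +1),
}
```

The dictionary-backed tape is finite but unbounded, matching the idealized infinite tape for any halting computation.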
Therefore, mathematics and logic are used. In the last century, it became an independent academic discipline and was separated from mathematics. Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines.
Theory of computation
These abstract machines are called automata. Automata theory is also closely related to formal language theory,  as the automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability. Language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet.
It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it (the Chomsky hierarchy), and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.
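At the bottom of the hierarchy, regular languages are recognized by finite automata. A minimal sketch (Python; the example language, binary numbers divisible by 3, is invented for illustration):

```python
def accepts(transitions, start, accepting, s):
    """Run a deterministic finite automaton on input string s."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# Invented example: binary strings whose numeric value is divisible by 3.
# The state is the value of the digits read so far, modulo 3; appending
# a bit b updates the value v to 2*v + b.
div3 = {(r, b): (2 * r + int(b)) % 3 for r in range(3) for b in "01"}
```

For instance, `accepts(div3, 0, {0}, "110")` checks the string "110" (six in binary).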
Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine  is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine.
Much of computability theory builds on the halting problem result. Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property. Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model.
Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are, respectively, how many steps it takes to perform a computation and how much memory is required to perform that computation.
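Counting steps can be made concrete with a small sketch (Python; the example is invented): searching an unsorted list for a value, while also returning the number of comparisons used.

```python
def linear_search(items, target):
    """Scan an unsorted list; return (index, comparisons used).

    Worst case (target absent or in last position) uses len(items)
    comparisons, so the time grows linearly with the input size n.
    """
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps - 1, steps
    return -1, len(items)
```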
In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking.

A numerical-computational model for static analysis of plane frames with semirigid connections and geometric nonlinear behavior is presented.
The algorithm pseudo-code is presented, and the finite element corotational method is used for the discretization of the structures. The equilibrium paths with load and displacement limit points are obtained.
The semirigidity is simulated by a linear connection element of null length, which considers the axial, tangential, and rotational stiffness. Nonlinear analyses of 2D frame structures are carried out with the free Scilab program. Also, the simulations show that the connection flexibility has a strong influence on the nonlinear behavior and stability of the structural systems. Buckling and postbuckling analyses of frames are important in structural design, particularly for slender elastic structures [ 2 ].
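The zero-length connection element described above can be sketched as follows (Python; the reduction to three uncoupled springs and all stiffness values are illustrative assumptions, since the paper's element lives inside a corotational finite element formulation):

```python
def connection_stiffness(ka, kt, kr):
    """6x6 stiffness matrix of a zero-length connection element joining two
    coincident 2D frame nodes, each with DOFs (u, v, theta).

    The axial (ka), tangential (kt), and rotational (kr) springs are taken
    as uncoupled, giving the block pattern K = [[D, -D], [-D, D]] with
    D = diag(ka, kt, kr); rigid-body motion therefore produces no force.
    """
    d = (ka, kt, kr)
    K = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        K[i][i] = K[i + 3][i + 3] = d[i]     # diagonal blocks
        K[i][i + 3] = K[i + 3][i] = -d[i]    # coupling blocks
    return K
```

Because the element has null length, only the relative displacement and rotation between the two coincident nodes generate forces.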
The analysis of beams and columns as independent members can lead to erroneous results when considering large displacements [ 3 ]. Connections between members of these structures can affect critical loading and postbuckling behavior [ 4 ]. Usually in steel structure designs, the frames are analyzed with the simplification that the beam-column connection behavior can be idealized by two extreme cases: ideally flexible, where no moment is transmitted between the column and the beam and these elements behave independently, and perfectly rigid, in which the total transmission of the moment occurs [ 5 ].
However, experimental investigations on real structures have pointed out that most connections between structural elements should be treated as semirigid connections and moment-rotation curves are used to describe their behavior [ 6 ].
With the development of informatics and significant computational resources in the last twenty years, much research on advanced analysis with semirigid connections has been published worldwide. A numerical model that includes both nonlinear connection behavior and geometric nonlinearity of the structure was developed by Sekulovic and Salatic [ 7 ]. Pinheiro and Silveira [ 8 ] discussed numerical and computational strategies for nonlinear analysis of frames with semirigid connections.
Zareian and Krawinkler [ 9 ] established a reliability-based approach for addressing the collapse potential of buildings. Lignos and Krawinkler [ 10 ] discussed the development of a database of experimental data of steel components and the use of this database for quantification of important parameters that affect the cyclic moment-rotation relationship at plastic hinge regions in beams. Nguyen and Kim [ 11 ] developed a numerical procedure based on the beam-column method for nonlinear elastic dynamic analysis of three-dimensional semirigid steel frames.
Rungamornrat et al. To fully understand the seismic performances and failure modes of beam-column joints in RC buildings, a simplified analytical model of joint behavior was proposed by Bossio et al. A numerical approach based on the member discrete element method to investigate static and dynamic responses of steel structures with semirigid joints was presented by Ye and Xu [ 14 ].
Fernandes et al. The performance of base-isolated steel structures having special moment frames was assessed by Rezaei Rad and Banazadeh [ 16 ]. Alvarenga [ 17 ] proposed a new numerical formulation that includes the behavior of the semirigid connection into plastic-zone finite element model for steel plane frames. An efficient methodology for solving the nonlinear system must be able to trace the complete equilibrium path and identify and pass through all limits or critical points of the structural system under analysis [ 18 — 21 ].
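A hedged one-degree-of-freedom sketch of such path following (Python; the cubic force law F(u) = u^3 - u, a classic snap-through surrogate, is invented for the example): appending an arc-length constraint lets Newton iteration continue past load limit points where pure load control would fail.

```python
def arc_length_path(F, dF, ds=0.05, steps=200):
    """Trace solutions of F(u) = lam in the (u, lam) plane.

    A spherical arc-length constraint is appended to the equilibrium
    equation, so Newton's method can pass load limit points (dF/du = 0).
    """
    u, lam = 0.0, 0.0
    du, dlam = ds, 0.0                 # initial predictor direction
    path = [(u, lam)]
    for _ in range(steps):
        u1, lam1 = u + du, lam + dlam  # predictor: reuse last increment
        for _ in range(20):            # corrector: 2x2 Newton iteration
            r1 = F(u1) - lam1                                  # equilibrium
            r2 = (u1 - u) ** 2 + (lam1 - lam) ** 2 - ds ** 2   # arc length
            a, b = dF(u1), -1.0                     # Jacobian row [a, b]
            c, d = 2 * (u1 - u), 2 * (lam1 - lam)   # Jacobian row [c, d]
            det = a * d - b * c
            u1 -= (r1 * d - r2 * b) / det
            lam1 -= (r2 * a - r1 * c) / det
            if abs(r1) + abs(r2) < 1e-10:
                break
        du, dlam = u1 - u, lam1 - lam
        u, lam = u1, lam1
        path.append((u, lam))
    return path

# Invented snap-through surrogate: F(u) = u^3 - u has load limit points
# at dF/du = 0, i.e. u = ±1/sqrt(3), where load control alone would fail.
path = arc_length_path(lambda u: u**3 - u, lambda u: 3 * u**2 - 1)
```

The traced path descends to the load limit point (lam about -0.385) and then rises again, exactly the behavior a pure load-incrementation scheme cannot follow.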
The nonlinear response of a structural system can be seen in Figure 1, in which a given displacement component may increase or decrease along the path. In this figure, load limit points (A, D), displacement limit points (B, C), and the point of failure (E) are identified [ 20 ]. Considerable attention has been devoted to the problem of tracing the response of semirigid frame connections in the presence of geometric nonlinearity.

Computational science, also known as scientific computing or scientific computation (SC), is a rapidly growing field that uses advanced computing capabilities to understand and solve complex problems.
It is an area of science which spans many disciplines, but at its core, it involves the development of models and simulations to understand natural systems.
In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiment, which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding, mainly through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs (application software) that model systems being studied and run these programs with various sets of input parameters.
In some cases, these models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms. The science that deals with computer modeling and simulation of physical objects and phenomena using high-level programming languages, software, and hardware is known as computer simulation.
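That workflow (one model program, run over many input parameter sets) can be sketched in Python; the logistic growth model and all parameter values are invented for illustration.

```python
def simulate_logistic(r, K, x0=1.0, dt=0.01, t_end=10.0):
    """Forward-Euler simulation of logistic growth: dx/dt = r*x*(1 - x/K)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * r * x * (1 - x / K)
    return x

# The same model program, run with various sets of input parameters.
results = {(r, K): simulate_logistic(r, K)
           for r in (0.5, 1.0, 2.0) for K in (10.0, 100.0)}
```

Each entry of `results` is one simulation run; in practice such sweeps are what gets farmed out to clusters or supercomputers.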
The term computational scientist is used to describe someone skilled in scientific computing. This person is usually a scientist, an engineer or an applied mathematician who applies high-performance computing in different ways to advance the state-of-the-art in their respective applied disciplines in physics, chemistry or engineering.
In fact, substantial effort in computational sciences has been devoted to the development of algorithms, the efficient implementation in programming languages, and validation of computational results.
A collection of problems and solutions in computational science can be found in Steeb, Hardy, Hardy and Stoop. Philosophers of science have addressed the question of to what degree computational science qualifies as science, among them Humphreys and Gelfert. Tolk uses these insights to show the epistemological constraints of computer-based simulation research.
As computational science uses mathematical models representing the underlying theory in executable form, in essence it applies modeling (theory building) and simulation (implementation and execution). While simulation and computational science are our most sophisticated ways to express knowledge and understanding, they also come with all the constraints and limits already known for computational solutions. Predictive computational science is a scientific discipline concerned with the formulation, calibration, numerical solution, and validation of mathematical models designed to predict specific aspects of physical events, given initial and boundary conditions and a set of characterizing parameters and associated uncertainties.
Over half the world's population now lives in cities. This urban growth is focused in the urban populations of developing countries, where the number of city dwellers will more than double. Cities are massive complex systems created by humans, made up of humans, and governed by humans. Trying to predict, understand, and somehow shape the development of cities in the future requires complex thinking, and requires computational models and simulations to help mitigate challenges and possible disasters.
The focus of research in urban complex systems is, through modeling and simulation, to build a greater understanding of city dynamics and help prepare for the coming urbanisation.
In today's financial markets huge volumes of interdependent assets are traded by a large number of interacting market participants in different locations and time zones.
Their behavior is of unprecedented complexity, and the characterization and measurement of the risk inherent to these highly diverse sets of instruments is typically based on complicated mathematical and computational models.
Solving these models exactly in closed form, even at a single instrument level, is typically not possible, and therefore we have to look for efficient numerical algorithms. This has become even more urgent and complex recently, as the credit crisis has clearly demonstrated the role of cascading effects going from single instruments through portfolios of single institutions to even the interconnected trading network.
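A hedged sketch of such a numerical algorithm (Python; all parameters are invented): Monte Carlo valuation of a European call under risk-neutral geometric Brownian motion. This particular instrument actually has a closed form (Black-Scholes), which makes it a convenient accuracy check; the same machinery extends to instruments that do not.

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_paths=200_000, seed=1):
    """Monte Carlo price of a European call under risk-neutral GBM:
    S_T = s0 * exp((r - sigma^2 / 2) * t + sigma * sqrt(t) * Z), Z ~ N(0, 1).
    """
    rng = random.Random(seed)              # seeded for reproducibility
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_t - k, 0.0)         # call payoff at maturity
    return math.exp(-r * t) * total / n_paths   # discounted average

# Invented parameters; the Black-Scholes value here is about 10.45,
# which the estimate should reproduce to within Monte Carlo error.
price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
```

The standard error shrinks only as 1/sqrt(n_paths), which is one reason efficient numerical algorithms and variance-reduction techniques matter at portfolio scale.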
Understanding this requires a multi-scale and holistic approach where interdependent risk factors such as market, credit, and liquidity risk are modelled simultaneously and at different interconnected scales.

Exciting new developments in biotechnology are now revolutionizing biology and biomedical research. Examples of these techniques are high-throughput sequencing, high-throughput quantitative PCR, intra-cellular imaging, in-situ hybridization of gene expression, and three-dimensional imaging techniques like Light Sheet Fluorescence Microscopy and Optical Projection (micro-)Computer Tomography.
Given the massive amounts of complicated data that is generated by these techniques, their meaningful interpretation, and even their storage, form major challenges calling for new approaches. Going beyond current bioinformatics approaches, computational biology needs to develop new methods to discover meaningful patterns in these large data sets. Model-based reconstruction of gene networks can be used to organize the gene expression data in a systematic way and to guide future data collection. A major challenge here is to understand how gene regulation is controlling fundamental biological processes like biomineralisation and embryogenesis.
The sub-processes like gene regulation, organic molecules interacting with the mineral deposition process, cellular processes, physiology, and other processes at the tissue and environmental levels are linked.
Rather than being directed by a central control mechanism, biomineralisation and embryogenesis can be viewed as emergent behavior resulting from a complex system in which several sub-processes on very different temporal and spatial scales (ranging from nanometers and nanoseconds to meters and years) are connected into a multi-scale system.

The Journal of Mathematical Sciences and Computational Mathematics (JMSCM) is a peer-reviewed, international journal which promptly publishes original research papers, reviews, and technical notes in the field of Pure and Applied Mathematics and Computing.
It focuses on theories and applications of mathematical and computational methods, with their developments and applications in Engineering, Technology, Finance, Fluid and Solid Mechanics, Life Sciences, and Statistics. The spectrum of the Journal is wide, aiming to exchange and enrich the learning experience among different strata from all corners of society.
The contributions will demonstrate interaction between various disciplines such as mathematical sciences, engineering sciences, numerical and computational sciences, physical and life sciences, management sciences and technological innovations. The Journal aims to become a recognized forum attracting motivated researchers from both the academic and industrial communities. The Journal is devoted towards advancement of knowledge and innovative ideas in the present world.