We show the practicality of two existing meta-learning algorithms, Model-Agnostic Meta-Learning and Fast Context Adaptation via Meta-Learning, using an evolutionary strategy for parameter optimization, and propose two novel quantum adaptations of those algorithms using continuous quantum neural networks, for learning to trade portfolios of stocks on the stock market. The goal of meta-learning is to train a model on a variety of tasks such that it can solve new learning tasks using only a small number of training samples. In our classical approach, we trained our meta-learning models on a variety of portfolios that contained 5 randomly sampled Consumer Cyclical stocks from a pool of 60. In our quantum approach, we trained our quantum meta-learning models on a simulated quantum computer with portfolios containing 2 randomly sampled Consumer Cyclical stocks. Our findings suggest that both classical models could learn a new portfolio with 0.01% of the number of training samples needed to learn the original portfolios, and can achieve performance within 0.1% Return on Investment of the Buy and Hold strategy. We also show that our much smaller quantum meta-learned models, with only 60 model parameters and 25 training epochs, have a learning pattern similar to that of our much larger classical meta-learned models, which have over 250,000 model parameters and 2500 training epochs. Given these findings, we also discuss the benefits of scaling up our experiments from a simulated quantum computer to a [...]
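The meta-learning idea summarized above — train across many tasks so that a new task can be learned from only a few samples — can be sketched with a minimal first-order MAML-style loop on toy 1-D regression tasks. Everything here (the task family, the scalar model, the learning rates) is an illustrative assumption, not the paper's evolutionary-strategy trading setup:

```python
import random

random.seed(0)

def sample_task():
    # A "task" is a 1-D linear regression y = a*x with a random slope a.
    a = random.uniform(-2.0, 2.0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(10)]
    return xs, [a * x for x in xs]

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # d/dw of the mean squared error for the scalar model y_hat = w*x.
    return sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def maml_train(meta_steps=2000, inner_lr=0.1, meta_lr=0.05):
    # First-order MAML: nudge the shared initialization w toward
    # weights that adapt well after one inner gradient step per task.
    w = 0.0
    for _ in range(meta_steps):
        xs, ys = sample_task()
        w_task = w - inner_lr * grad(w, xs, ys)   # inner adaptation step
        w -= meta_lr * grad(w_task, xs, ys)       # first-order meta-update
    return w

w0 = maml_train()
xs_new, ys_new = sample_task()                    # unseen task, few samples
w_adapted = w0 - 0.1 * grad(w0, xs_new, ys_new)   # one-step adaptation
```

The point of the sketch is the two-level structure: the inner step adapts to one task, while the outer step improves the initialization shared across tasks.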
It is demonstrated that the recently introduced semantic intelligence spontaneously maintains bounded logical and quantal error on each and every semantic trajectory, unlike its algorithmic counterpart, which is not able to. This result verifies the conclusion about the assignment of equal evolutionary value to the motion on the set of all semantic trajectories sharing the same homeostatic pattern. The evolutionary value of the permanent and spontaneous maintenance of bounded logical and quantal error on each and every semantic trajectory is that it spontaneously keeps the notion of a kind intact in the long run.
Heat-bath algorithmic cooling (HBAC) has been proven to be a powerful and effective method for obtaining high polarization of the target system. Its cooling upper bound has recently been found using a specific algorithm, the partner pairing algorithm (PPA-HBAC). It has been shown that by including cross-relaxation, it is possible to surpass the cooling bounds. Herein, by combining cross-relaxation and a decoherence-free subspace, we present a two-qubit reset sequence and then generate a new algorithmic cooling (AC) technique using irreversible polarization compression to further surpass the bound. The proposed two-qubit reset sequence can prepare one of the two qubits with four times the polarization of a single-qubit reset operation in PPA-HBAC for low polarization. When the qubit number is large, the cooling limit of the proposed AC is approximately five times as high as that of PPA-HBAC. The results reveal that cross-relaxation and decoherence-free subspaces are promising resources for creating new AC techniques for higher polarization.
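For context on the polarization boosts discussed above: the basic reversible step used in algorithmic cooling compresses the entropy of three equally polarized qubits into two, raising the target qubit's bias from ε to (3ε − ε³)/2 ≈ 1.5ε for small ε. This is a standard textbook relation from the HBAC literature, not the paper's new reset sequence. A quick numeric check:

```python
def boosted_polarization(eps):
    # Polarization of the target qubit after one 3-qubit reversible
    # entropy-compression step, with each input qubit at bias eps.
    return (3.0 * eps - eps ** 3) / 2.0

for eps in (0.01, 0.1, 0.5):
    print(eps, boosted_polarization(eps), boosted_polarization(eps) / eps)
```

Note how the 1.5x gain per step degrades as the bias grows, which is why repeated rounds (and fresh heat-bath resets) are needed to approach the cooling limits the abstract refers to.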
The convenience of converting linearly constrained convex optimization problems to a monotone variational inequality is well recognized. Recently, we proposed a unified algorithmic framework which can guide us in constructing solution methods for these monotone variational inequalities. In this work, we revisit two augmented Lagrangian methods with full Jacobian decomposition for separable convex programming which we studied a few years ago. In particular, exploiting this framework, we are able to give a very clear and elementary proof of the convergence of these solution methods.
In recent years, the booming of the Bike Sharing System (BSS) has played an important role in offering a convenient means of public transport. The BSS is also viewed as a solution to the first/last-mile connection issue in urban cities. BSSs can be classified into docked and dock-less. However, due to imbalance in bike usage over the spatial and temporal domains, stations in a BSS may exhibit overflow (full stations) or underflow (empty stations). In this paper, we take a holistic view of BSS design by examining the following four components: system design, system prediction, system balancing, and trip advisor. We focus on system balancing, addressing the issue of overflow/underflow. We look at the two main methods of bike re-balancing: with trucks and with workers. Discussion of the other three components as they relate to system balancing is also given. Specifically, we study various algorithmic solutions exploiting the availability of data in the spatial and temporal domains. Finally, we discuss several key challenges and opportunities of BSS design and applications, as well as the future of docked and dock-less BSSs in the bigger setting of the transportation system.
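The truck-based re-balancing discussed above can be illustrated with a toy greedy heuristic: repeatedly move bikes from the most overfull station to an empty one until no station is full or empty. This is a hypothetical sketch for illustration only, not one of the algorithms surveyed in the paper:

```python
def greedy_rebalance(counts, cap):
    # counts[i]: bikes docked at station i; cap: docks per station.
    # Overflow = full station (no free dock); underflow = empty station.
    counts = counts[:]
    moves = []
    while True:
        full = [i for i, c in enumerate(counts) if c >= cap]
        empty = [i for i, c in enumerate(counts) if c == 0]
        if not full or not empty:
            break
        src = max(full, key=lambda i: counts[i])
        dst = min(empty, key=lambda i: counts[i])
        n = (counts[src] - counts[dst]) // 2   # even the two stations out
        counts[src] -= n
        counts[dst] += n
        moves.append((src, dst, n))            # one truck trip
    return counts, moves

final, moves = greedy_rebalance([10, 0, 5, 0, 10], cap=10)
```

Real systems must additionally account for truck routing costs and predicted demand, which is where the prediction component above comes in.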
Many Beijing Siheyuan, a type of Chinese vernacular housing with significant cultural value, have been lost in recent years. Preserving the few remaining has become a necessity, but many contemporary architects lack an understanding of their design principles. Based on a historical analysis drawing on Fengshui theory, the ancient construction manual Gongcheng Zuofa Zeli, and craftsmen's experience, this paper describes a parametric algorithm capable of producing Siheyuan variants within a 4D CAD environment; by transforming the original design principles into an algorithm, it contributes to an understanding of Siheyuan typology and to their preservation. The algorithm was implemented in a virtual scripting environment to generate accurate virtual counterparts of historical or extant Siheyuan houses, revealing the tacit computational rules underlying traditional Chinese architecture.
Computation-based approaches in design have emerged in the last decades and rapidly become popular among architects and other designers. Design professionals and researchers have adopted different terminologies to address these approaches. However, some terms are used ambiguously and inconsistently, and different terms are commonly used to express the same concept. This paper discusses computational design (CD) and proposes an improved and sound taxonomy for a set of key CD terms, namely parametric, generative, and algorithmic design, based on an extensive literature review from which definitions by various authors were collected, analyzed, and compared.
The ability to accurately estimate the cost needed to complete a specific project has been a challenge over the past decades. For a successful software project, accurate prediction of cost, time, and effort is an essential task. This paper presents a systematic review of different models used for software cost estimation, covering algorithmic methods, non-algorithmic methods, and learning-oriented methods. The models considered in this review include both traditional and recent approaches to software cost estimation. The main objective of this paper is to provide an overview of software cost estimation models and to summarize their strengths, weaknesses, accuracy, amount of data needed, and validation techniques used. Our findings show that, in general, neural-network-based models outperform other cost estimation techniques. However, no one technique fits every problem, and we recommend that practitioners search for the model that best fits their needs.
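As a concrete instance of the algorithmic methods surveyed above, the classic basic COCOMO model estimates effort as a power law of code size. The coefficients below are the standard "organic mode" constants from Boehm's basic model, assumed here purely for illustration:

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    # Basic COCOMO, "organic" mode coefficients; semidetached and
    # embedded projects use larger (a, b) pairs in the original model.
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # calendar months
    return effort, duration

effort, duration = basic_cocomo(32)  # a 32 KLOC project
```

A project of 32 KLOC comes out at roughly 91 person-months over about 14 months; the review's point is that such fixed power laws are easy to apply but are often outperformed by learned models tuned to local data.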
A novel notion of self-organization is introduced whose major property is that it brings about the execution of semantic intelligence as spontaneous physico-chemical processes in an unspecified, ever-changing, non-uniform environment. Its greatest advantage is that the covariance of causality encapsulated in any piece of semantic intelligence is provided with a great diversity of its individuality, viewed as the properties of the current response, and of its reproducibility, viewed as the causality encapsulated in any of the homeostatic patterns. In addition, the consistency of the functional metric, which is always Euclidean, with any metric of space-time renders the proposed notion of self-organization ubiquitously available.
The proposed design methodology combines data analysis, algorithmic design, linguistics, mathematics, and physics to methodically analyze the complex system of the urban fabric. The implementation of this design mechanism uses the urban fabric of Eleusis as a paradigm. The urban body of Eleusis is defined as a palimpsest of descriptions and images, synthesized spatially in multiple layers, and is therefore considered a suitable sample for the preliminary application of the proposed methodology. Urban reality can be described as a complex system, as it consists of both tangible and intangible elements, the characteristics of which are quantified by the context and the logical descriptions in which they are incorporated. However, descriptive logic changes constantly, following the development of multiplicity and the extension of concepts. Therefore, the body of urban reality is redefined continuously following changes in both logical descriptions and contexts. Considering that each framework draws up an ideology and chooses to analyze its core meaning, a linguistic analysis tool is developed that combines data visualization, linguistic, and design methodologies to parameterize the descriptions of logic. Written speech is transformed into networks visualizing its ontological relationships. This creates a nebula of data (a cluster) that assembles the immaterial reality of the urban fabric, which is then transformed into local parameters by the appropriate methodological processes, giving the relation between the nebula of descriptions and the locality of the network of descriptions. The material reality of the urban fabric is described through analyzing its grid transformations.
The infinite body[1] of reality is synthesized through the composition of its material and [...]

[1] The objective of this research focuses on the construction of reality. The term reality is linked to pragmatism, derives from the ancient Greek word "pragma", and refers to anything that can be revealed through our senses.
This paper focuses on the application of algorithmic modelling techniques to represent, design, and fabricate gravitational-lens effects (as described by the astrophysical theory of 'Dark Matter') in the form of a garden pavilion for the Royal Horticultural Society's Chelsea Flower Show. In addition, this research-led project explores the challenges that occur in the use of three-dimensional CNC bending technologies. This is a research-by-design project, and its method is based on a design framework incorporating a generative algorithm linked to feedback loops for parameters such as the laws of gravity, plot dimensions, materiality, positioning of the plants, construction and fabrication requirements, cost, and the overall aesthetics. Its findings highlight the accomplishments and failures of a file-to-factory design and fabrication process that incorporates algorithmic modelling and digital manufacturing techniques in a collaborative environment. The 'Dark Matter Garden' installation was awarded the gold medal for 'Best Fresh Garden' by the Royal Horticultural Society in 2015.
Reinforcement learning provides a cognitive-science perspective on behavior and sequential decision making, given that reinforcement learning algorithms introduce a computational concept of agency to the learning problem. Hence it addresses an abstract class of problems that can be characterized as follows: an algorithm confronted with information from an unknown environment is supposed to find, stepwise, an optimal way to behave, based only on some sparse, delayed, or noisy feedback from an environment that changes according to the algorithm's behavior. Reinforcement learning thus offers an abstraction of the problem of goal-directed learning from interaction. The paper offers an opinionated introduction to the advantages and drawbacks of several algorithmic approaches, in order to present algorithmic design options.
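The characterization above — an agent learning from sparse, delayed feedback in an environment it influences — can be made concrete with minimal tabular Q-learning on a toy chain. The environment and hyperparameters are illustrative assumptions, not part of the paper:

```python
import random

random.seed(2)

N = 6               # states 0..5 on a chain; entering state 5 pays reward 1
ACTIONS = (-1, +1)  # step left / step right

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    reward = 1.0 if s2 == N - 1 else 0.0   # sparse, delayed feedback
    return s2, reward, s2 == N - 1

def q_learn(episodes=500, alpha=0.5, gamma=0.9):
    # Off-policy tabular Q-learning: behave randomly, learn the greedy policy.
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = random.choice(ACTIONS)
            s2, r, done = step(s, a)
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = q_learn()
greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N)}
```

Although the reward arrives only at the far end of the chain, the bootstrapped value estimates propagate it backwards until the greedy policy walks right from every state.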
The algorithmic tangent modulus at finite strains in the current configuration plays an important role in the nonlinear finite element method. In this work, the exact tensorial forms of the algorithmic tangent modulus at finite strains are derived in the principal space and their corresponding matrix expressions are also presented. The algorithmic tangent modulus consists of two terms. The first term depends on a specific yield surface, while the second term is independent of the specific yield surface. The elastoplastic matrix in the principal space associated with the specific yield surface is derived by the logarithmic strains in terms of the local multiplicative decomposition. The Drucker-Prager yield function of elastoplastic material is used as a numerical example to verify the present algorithmic tangent modulus at finite strains.
Background: Discrete clinical and pathological subtypes of Alzheimer's disease (AD) with variable presentations and rates of progression are well known. These subtypes may have specific patterns of regional brain atrophy which are identifiable on MRI scans. Methods: To identify regions with distinct underlying patterns of cortical atrophy, factor-analytic techniques were applied to structural MRI volumetric data from cognitively normal (CN) (n = 202), amnestic mild cognitive impairment (aMCI) (n = 333), and mild AD (n = 146) subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. This revealed the existence of two neocortical clusters (NeoC-1 and NeoC-2) and a limbic cluster of atrophic brain regions. The frequency and clinical correlates of these regional patterns of atrophy were evaluated among the three diagnostic groups, and the rates of progression from aMCI to AD over 24 months were evaluated. Results: Discernible patterns of regional atrophy were observed in about 29% of CN, 55% of aMCI, and 83% of AD subjects. Heterogeneity in clinical presentation and APOE ε4 frequency was associated with regional patterns of atrophy on MRI scans. The most rapid progression rates to dementia among aMCI subjects (n = 224) over a 24-month period were in those with NeoC-1 regional impairment (68.2%), followed by limbic regional impairment (48.8%). The same pattern of results was observed when only amyloid-positive aMCI subjects were examined. Conclusions: The neuroimaging results closely parallel findings described recently among AD patients with the hippocampal-sparing and limbic subtypes of AD neuropathology at autopsy. We conclude that NeoC-1, limbic, and other patterns of MRI atrophy may be useful markers for predicting the rate of progression of aMCI to AD and could have utility in selecting individuals at higher risk of progression for clinical trials.
This paper focuses on available-server management in Internet-connected network environments. Local backup servers are hooked up by LAN and replace a broken main server immediately; several different types of backup servers are also considered. Remote backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network. A VPN is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The remote backup servers likewise replace broken main servers immediately, under different conditions than the local backups. When the system performs mandatory routine maintenance of the main and local backup servers, auxiliary servers from other locations are used for backup during the idle periods. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems. The operational workflow gives guidelines for actual implementations.
We describe here a comprehensive framework for intelligent information management (IIM) of data collection and decision-making actions for reliable and robust event processing and recognition. This is driven by algorithmic information theory (AIT) in general, and algorithmic randomness and Kolmogorov complexity (KC) in particular. The processing and recognition tasks addressed include data discrimination and multilayer open-set data categorization, change detection, data aggregation, clustering and data segmentation, data selection and link analysis, data cleaning and data revision, and the prediction and identification of critical states. The unifying theme throughout the paper is that of "compression entails comprehension", which is realized using the interrelated concepts of randomness vs. regularity and Kolmogorov complexity. The constructive and all-encompassing active learning (AL) methodology, which mediates and supports the above theme, is context-driven and takes advantage of statistical learning in general, and semi-supervised learning and transduction in particular. Active learning employs explore and exploit actions, characteristic of closed-loop control, for evidence accumulation in order to revise its prediction models and to reduce uncertainty. The set-based similarity scores, driven by algorithmic randomness and Kolmogorov complexity, employ strangeness/typicality and p-values. We propose the application of the IIM framework to critical-state prediction for complex physical systems; in particular, the prediction of cyclone genesis and intensification.
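The "compression entails comprehension" theme above is commonly operationalized by approximating the uncomputable Kolmogorov complexity with a real compressor, as in the normalized compression distance (NCD). This zlib-based sketch is a generic illustration of that idea, not the paper's specific strangeness/p-value machinery:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: a computable stand-in for the
    # Kolmogorov-complexity-based information distance. Similar inputs
    # compress well together, yielding a small distance.
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog " * 20
s2 = b"the quick brown fox jumps over the lazy cat " * 20  # near-duplicate
s3 = bytes(range(256)) * 4                                 # unrelated data
```

Here ncd(s1, s2) comes out much smaller than ncd(s1, s3), which is exactly the regularity-vs-randomness distinction the framework exploits for clustering and categorization.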
Binary Decision Diagrams (BDDs) can be graphically manipulated to reduce the number of nodes and hence the area. In this context, the ordering of BDD variables plays a major role. Most algorithms for input-variable ordering of OBDDs focus primarily on area minimization. However, a suitable input-variable ordering also helps minimize power consumption. In this work, we propose two algorithms, a genetic-algorithm-based technique and a branch-and-bound algorithm, to find an optimal input-variable order. Node reordering is taken care of by the standard BDD package BuDDy 2.4. Moreover, we have evaluated the performance of the proposed algorithms by running an exhaustive search program. Experimental results show a substantial saving in area and power. We have also compared our techniques with other state-of-the-art variable-ordering techniques for OBDDs and found them to give superior results.
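A minimal sketch of the genetic-algorithm idea above, using a toy proximity cost in place of a real BDD node count (the actual work delegates that to the BuDDy package; the cost function and GA parameters here are assumptions for illustration):

```python
import random

random.seed(1)

def order_cost(order, pairs):
    # Toy stand-in for BDD node count: variables that interact in the
    # function (given as index pairs) should sit close together.
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in pairs)

def crossover(p1, p2):
    # Order crossover (OX): keep a slice of p1, fill the rest from p2,
    # so the child is always a valid permutation.
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[i:j])
    rest = [v for v in p2 if v not in hole]
    return rest[:i] + p1[i:j] + rest[i:]

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def ga_order(n_vars, pairs, pop_size=30, generations=60):
    pop = [random.sample(range(n_vars), n_vars) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: order_cost(o, pairs))
        elite = pop[: pop_size // 2]                 # keep the best half
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda o: order_cost(o, pairs))

# Variables 2k and 2k+1 interact, as in f = x0*x1 + x2*x3 + ..., a
# function whose BDD size is famously sensitive to variable order.
pairs = [(2 * k, 2 * k + 1) for k in range(5)]
best = ga_order(10, pairs)
```

In the real setting the fitness function would query the BDD package for the node count under each candidate order, which is far more expensive and is exactly why heuristic search is used instead of exhaustive enumeration.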
This article describes the development of an application for generating tonal melodies. The goal of the project is to ascertain our current understanding of tonal music by means of algorithmic music generation. The method followed consists of four stages: 1) selection of music-theoretical insights, 2) translation of these insights into a set of principles, 3) conversion of the principles into a computational model having the form of an algorithm for music generation, and 4) testing the "music" generated by the algorithm to evaluate the adequacy of the model. As an example, the method is implemented in Melody Generator, an algorithm for generating tonal melodies. The program has a structure suited to generating, displaying, playing, and storing melodies, functions which are all accessible via a dedicated interface. The actual generation of melodies is based in part on constraints imposed by the tonal context, i.e., by meter and key, the settings of which are controlled by means of parameters on the interface. For the other part, it is based on a set of construction principles, including the notion of a hierarchical organization and the idea that melodies consist of a skeleton that may be elaborated in various ways. After these aspects were implemented as specific sub-algorithms, the device produced simple but well-structured tonal melodies.
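The skeleton-plus-elaboration construction described above can be sketched as follows. The scale, the chord-tone skeleton, and the passing-tone elaboration rule below are simplified assumptions for illustration, not Melody Generator's actual sub-algorithms:

```python
import random

random.seed(3)

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, C4..C5
TRIAD = [60, 64, 67, 72]                    # tonic chord tones

def skeleton(n_notes=4):
    # Structural tones: start and end on the tonic, chord tones between.
    inner = [random.choice(TRIAD) for _ in range(n_notes - 2)]
    return [60] + inner + [60]

def elaborate(sk):
    # Fill each skeletal interval with stepwise scale tones (passing
    # tones), a crude version of "skeleton + elaboration".
    melody = []
    for a, b in zip(sk, sk[1:]):
        i, j = C_MAJOR.index(a), C_MAJOR.index(b)
        step = 1 if j >= i else -1
        melody.extend(C_MAJOR[i:j:step] if i != j else [a])
    melody.append(sk[-1])
    return melody

tune = elaborate(skeleton())
```

Even this toy version shows the two-level design: the skeleton carries the tonal constraints (key, chord tones), while elaboration supplies stepwise surface motion.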
Funding for the heat-bath algorithmic cooling study: supported by the National Key Research and Development Program of China (Grant No. 2018YFA0306600); the National Natural Science Foundation of China (Grant Nos. 11905184 and 11605153); the Natural Science Foundation of Zhejiang Province (Grant No. LQ19A050001); and the Anhui Initiative in Quantum Information Technologies (Grant No. AHY050000).
Funding for the separable convex programming study: the author was supported by NSFC Grant No. 11871029.
Funding for the Beijing Siheyuan study: supported by the China Scholarship Council (No. 201708510109).
Funding for the computational design taxonomy study: supported by national funds through the Fundação para a Ciência e a Tecnologia (FCT) under references UID/CEC/50021/2019 and PTDC/ART-DAQ/31061/2017; by FCT PhD grants SFRH/BD/128628/2017 and SFRH/BD/98658/2013; and by a PhD grant under contract with the University of Lisbon (UL), Instituto Superior Técnico (IST), and the research unit CERIS (Investigação e Inovação em Engenharia Civil para a Sustentabilidade).
文摘Computation-based approaches in design have emerged in the last decades and rapidly became popular among architects and other designers.Design professionals and researchers adopted different terminologies to address these approaches.However,some terms are used ambiguously and inconsistently,and different terms are commonly used to express the same concept.This paper discusses computational design(CD)and proposes an improved and sound taxonomy for a set of key CD terms,namely,parametric,generative,and algorithmic design,based on an extensive literature review from which different definitions by various authors were collected,analyzed,and compared.
Abstract: The ability to accurately estimate the cost needed to complete a specific project has been a challenge over the past decades. For a successful software project, accurate prediction of cost, time, and effort is an essential task. This paper presents a systematic review of models used for software cost estimation, including algorithmic, non-algorithmic, and learning-oriented methods. The models considered cover both traditional and recent approaches to software cost estimation. The main objective of this paper is to provide an overview of software cost estimation models and to summarize their strengths, weaknesses, accuracy, data requirements, and validation techniques. Our findings show that, in general, neural-network-based models outperform other cost estimation techniques. However, no single technique fits every problem, and we recommend that practitioners search for the model that best fits their needs.
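Among the classic algorithmic methods that such reviews typically cover is Boehm's Basic COCOMO, which estimates effort as a power law of program size. A minimal sketch (the standard Basic COCOMO coefficients, not a model proposed in this paper):

```python
# Basic COCOMO: effort (person-months) = a * KLOC^b, with (a, b)
# fixed per project class (Boehm, 1981).
COCOMO_COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b
```

For example, a 10-KLOC organic project is estimated at roughly 27 person-months, while the same size in embedded mode roughly doubles the estimate; the superlinear exponent is what makes large projects disproportionately expensive.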
Abstract: A novel notion of self-organization is introduced whose major property is that it brings about the execution of semantic intelligence as spontaneous physico-chemical processes in an unspecified, ever-changing, non-uniform environment. Its greatest advantage is that the covariance of causality encapsulated in any piece of semantic intelligence is provided with great diversity in its individuality, viewed as the properties of the current response, and with reproducibility, viewed as the causality encapsulated in any of the homeostatic patterns. Alongside, the consistency of the functional metric, which is always Euclidean, with any metric of space-time renders the proposed notion of self-organization ubiquitously available.
Abstract: The proposed design methodology combines data analysis, algorithmic design, linguistics, mathematics, and physics to analyze methodically the complex system of the urban fabric. The implementation of this design mechanism uses the urban fabric of Eleusis as a paradigm. The urban body of Eleusis is defined as a palimpsest of descriptions and images, synthesized spatially in multiple layers, and is therefore considered a suitable sample for the preliminary application of the proposed methodology. Urban reality can be described as a complex system, as it consists of both tangible and intangible elements, the characteristics of which are quantified by the context and the logical descriptions in which they are incorporated. However, descriptive logic changes constantly, following the development of multiplicity and the extension of concepts. Therefore, the body of urban reality is redefined continuously following changes in both logical descriptions and contexts. Considering that each framework draws up an ideology and chooses to analyze its core meaning, a linguistic analysis tool is developed that combines data visualization, linguistic, and design methodologies to parameterize descriptions of logics. Written speech is transformed into networks visualizing their ontological relationships. This creates a nebula of data (cluster) that assembles the immaterial reality of the urban fabric, which is then transformed into local parameters by the appropriate methodological processes, giving the relation between the nebula of descriptions and the locality of the network of descriptions. The material reality of the urban fabric is described through the analysis of its grid transformations. The infinite body of reality is synthesized through the composition of its material and immaterial elements.[1]
[1] The objective of this research focuses on the construction of reality. The term reality is linked to pragmatism, derives from the ancient Greek word "pragma", and refers to anything that can be revealed through our senses.
Abstract: This paper focuses on the application of algorithmic modelling techniques to represent, design, and fabricate gravitational lens effects (as described by the astrophysical theory of 'Dark Matter') in the form of a garden pavilion for the Royal Horticultural Society's Chelsea Flower Show. In addition, this research-led project explores the challenges that arise in the use of three-dimensional CNC bending technologies. This is a research-by-design project, and its method is based on a design framework that incorporates a generative algorithm linked to feedback loops for parameters such as the laws of gravity, plot dimensions, materiality, positioning of the plants, construction and fabrication requirements and cost, as well as the overall aesthetics. Its findings highlight accomplishments and failures of a file-to-factory design and fabrication process that incorporates algorithmic modelling and digital manufacturing techniques in a collaborative environment. The 'Dark Matter Garden' installation was awarded the gold medal for 'Best Fresh Garden' by the Royal Horticultural Society in 2015.
Abstract: Reinforcement learning provides a cognitive-science perspective on behavior and sequential decision making, in that reinforcement learning algorithms introduce a computational concept of agency to the learning problem. Hence it addresses an abstract class of problems that can be characterized as follows: an algorithm confronted with information from an unknown environment must find, step by step, an optimal way to behave based only on sparse, delayed, or noisy feedback from an environment that changes according to the algorithm's behavior. Reinforcement learning thus offers an abstraction of the problem of goal-directed learning from interaction. The paper offers an opinionated introduction to the advantages and drawbacks of several algorithmic approaches, so as to inform algorithmic design choices.
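The abstract class of problems described above can be made concrete with a minimal tabular Q-learning sketch: an agent in a tiny corridor environment receives only sparse, delayed feedback (reward at the rightmost state) and must discover step by step that moving right is optimal. This is a generic illustrative toy, not an example from the paper:

```python
import random

N_STATES = 5
ACTIONS = (0, 1)                 # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(s, a):
    """Deterministic corridor: reward only on reaching the goal state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy exploration; break value ties randomly
            if rng.random() < EPS or q[(s, 0)] == q[(s, 1)]:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Q-learning update toward the one-step bootstrapped target
            target = r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
    return q
```

After training, the learned values prefer "right" in every non-terminal state, even though the only nonzero feedback arrives at the end of the corridor; this delayed-credit-assignment behavior is the essence of the problem class.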
Funding: Project supported by the National Natural Science Foundation of China (Nos. 41172116, U1261212, and 51134005).
Abstract: The algorithmic tangent modulus at finite strains in the current configuration plays an important role in the nonlinear finite element method. In this work, the exact tensorial forms of the algorithmic tangent modulus at finite strains are derived in the principal space, and their corresponding matrix expressions are also presented. The algorithmic tangent modulus consists of two terms: the first depends on the specific yield surface, while the second is independent of it. The elastoplastic matrix in the principal space associated with the specific yield surface is derived from the logarithmic strains in terms of the local multiplicative decomposition. The Drucker-Prager yield function for elastoplastic material is used as a numerical example to verify the present algorithmic tangent modulus at finite strains.
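For orientation, the kinematic setting underlying such derivations can be sketched in standard finite-strain plasticity notation (a sketch of the conventional framework, not the paper's exact expressions):

```latex
F = F^{e} F^{p}, \qquad
b^{e} = F^{e} (F^{e})^{\mathsf{T}}
      = \sum_{i=1}^{3} (\lambda^{e}_{i})^{2} \, n_{i} \otimes n_{i}, \qquad
\varepsilon^{e}_{i} = \ln \lambda^{e}_{i} ,
```

where $F = F^{e}F^{p}$ is the local multiplicative decomposition, $\lambda^{e}_{i}$ and $n_{i}$ are the elastic principal stretches and directions, and $\varepsilon^{e}_{i}$ are the logarithmic strains. The two-term structure described in the abstract then takes the schematic form $\mathbb{C}^{\mathrm{alg}} = \mathbb{C}_{1} + \mathbb{C}_{2}$, with $\mathbb{C}_{1}$ the return-mapping (yield-surface-dependent) part in the principal space and $\mathbb{C}_{2}$ the geometric part arising from the spinning principal directions, independent of the yield surface.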
Abstract: Background: Discrete clinical and pathological subtypes of Alzheimer’s disease (AD) with variable presentations and rates of progression are well known. These subtypes may have specific patterns of regional brain atrophy that are identifiable on MRI scans. Methods: To identify distinct regions with distinct underlying patterns of cortical atrophy, factor analytic techniques were applied to structural MRI volumetric data from cognitively normal (CN) (n = 202), amnestic mild cognitive impairment (aMCI) (n = 333), and mild AD (n = 146) subjects in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. This revealed the existence of two neocortical clusters (NeoC-1 and NeoC-2) and a limbic cluster of atrophic brain regions. The frequency and clinical correlates of these regional patterns of atrophy were evaluated across the three diagnostic groups, and the rates of progression from aMCI to AD over 24 months were evaluated. Results: Discernible patterns of regional atrophy were observed in about 29% of CN, 55% of aMCI, and 83% of AD subjects. Heterogeneity in clinical presentation and APOE ε4 frequency was associated with regional patterns of atrophy on MRI scans. The most rapid progression rates to dementia among aMCI subjects (n = 224) over a 24-month period were in those with NeoC-1 regional impairment (68.2%), followed by those with limbic regional impairment (48.8%). The same pattern of results was observed when only amyloid-positive aMCI subjects were examined. Conclusions: The neuroimaging results closely parallel findings described recently among AD patients with the hippocampal-sparing and limbic subtypes of AD neuropathology at autopsy. We conclude that NeoC-1, limbic, and other patterns of MRI atrophy may be useful markers for predicting the rate of progression from aMCI to AD and could have utility in selecting individuals at higher risk of progression for clinical trials.
Abstract: The paper focuses on available-server management in Internet-connected network environments. Local backup servers, connected by LAN, replace a broken main server immediately; several different types of backup servers are also considered. Remote backup servers are connected by a Virtual Private Network (VPN) over a high-speed optical network. A VPN uses a public network infrastructure to connect long-distance servers within a single network infrastructure. The remote backup servers likewise replace broken main servers immediately, under conditions different from those of the local backups. When the system performs mandatory routine maintenance of the main and local backup servers, auxiliary servers from other locations are used for backup during idle periods. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems. The operational workflow gives guidelines for actual implementations.
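As a toy illustration of why layered backups pay off (a hypothetical independent-failure model, not the paper's analytic derivation): if the main server and its backups fail independently, the system is unavailable only when all of them are down at once.

```python
def system_availability(avails):
    """Availability of a main server plus backups in parallel.
    Assuming independent failures with per-server availabilities a_i:
        A_system = 1 - prod_i (1 - a_i)
    (the system is down only if every server is down)."""
    p_all_down = 1.0
    for a in avails:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down
```

Under this model, adding a 0.9-available local backup to a 0.9-available main server lifts system availability to 0.99, and a second (e.g. remote) backup lifts it to 0.999, which is the intuition behind combining LAN-local and VPN-remote backups.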
Abstract: We describe here a comprehensive framework for intelligent information management (IIM) of data collection and decision-making actions for reliable and robust event processing and recognition. It is driven by algorithmic information theory (AIT) in general, and by algorithmic randomness and Kolmogorov complexity (KC) in particular. The processing and recognition tasks addressed include data discrimination and multilayer open-set data categorization, change detection, data aggregation, clustering and data segmentation, data selection and link analysis, data cleaning and data revision, and prediction and identification of critical states. The unifying theme throughout the paper is that of “compression entails comprehension”, which is realized using the interrelated concepts of randomness vs. regularity and Kolmogorov complexity. The constructive and all-encompassing active learning (AL) methodology, which mediates and supports this theme, is context-driven and takes advantage of statistical learning in general, and of semi-supervised learning and transduction in particular. Active learning employs explore-and-exploit actions, characteristic of closed-loop control, for evidence accumulation in order to revise its prediction models and reduce uncertainty. The set-based similarity scores, driven by algorithmic randomness and Kolmogorov complexity, employ strangeness/typicality and p-values. We propose the application of the IIM framework to critical-state prediction for complex physical systems; in particular, the prediction of cyclone genesis and intensification.
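Since Kolmogorov complexity itself is uncomputable, similarity scores of this kind are realized in practice with real compressors. A minimal sketch of the standard normalized compression distance (NCD), here using zlib as a stand-in compressor (this is the generic construction, not the paper's specific scoring function):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is compressed length, a computable proxy for Kolmogorov
    complexity. Values near 0 mean "similar"; near 1, "unrelated"."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

The "compression entails comprehension" theme shows up directly: if x already "explains" y, compressing their concatenation costs little more than compressing x alone, so the distance is small.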
Abstract: Binary Decision Diagrams (BDDs) can be graphically manipulated to reduce the number of nodes and hence the area. In this context, the ordering of BDD variables plays a major role. Most algorithms for input variable ordering of OBDDs focus primarily on area minimization. However, a suitable input variable ordering also helps minimize power consumption. In this work, we propose two algorithms, a genetic-algorithm-based technique and a branch-and-bound algorithm, to find an optimal input variable order. Node reordering is handled by the standard BDD package buddy-2.4. Moreover, we evaluate the performance of the proposed algorithms by running an exhaustive search program. Experimental results show a substantial saving in area and power. We also compare our techniques with other state-of-the-art variable ordering techniques for OBDDs and find that they give superior results.
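The genetic-algorithm side of such an approach can be sketched generically: evolve a population of variable orders (permutations), scoring each with a cost function that, in the OBDD setting, would be the node count reported by a BDD package such as buddy-2.4. The sketch below uses order crossover and swap mutation with elitist selection; the cost function is a user-supplied stand-in, not buddy's API:

```python
import random

def ga_order(n_vars, cost, pop_size=30, gens=60, seed=0):
    """Genetic-algorithm search over input variable orders (permutations).
    `cost(order)` is any objective to minimize; for OBDDs it would be the
    node count of the diagram built under that order."""
    rng = random.Random(seed)

    def mutate(p):
        # swap mutation: exchange the variables at two random positions
        q = p[:]
        i, j = rng.sample(range(n_vars), 2)
        q[i], q[j] = q[j], q[i]
        return q

    def crossover(p1, p2):
        # order crossover (OX): keep a slice of p1, fill the rest in p2's order
        i, j = sorted(rng.sample(range(n_vars), 2))
        kept = p1[i:j]
        rest = [v for v in p2 if v not in kept]
        return rest[:i] + kept + rest[i:]

    pop = [rng.sample(range(n_vars), n_vars) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                 # elitist selection: keep best half
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*rng.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=cost)
```

Because both operators always produce valid permutations, every candidate is a legal variable order; the branch-and-bound alternative mentioned in the abstract would instead prune the permutation tree with a lower bound on the achievable node count.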
Abstract: This article describes the development of an application for generating tonal melodies. The goal of the project is to ascertain our current understanding of tonal music by means of algorithmic music generation. The method followed consists of four stages: 1) selection of music-theoretical insights; 2) translation of these insights into a set of principles; 3) conversion of the principles into a computational model in the form of an algorithm for music generation; 4) testing the “music” generated by the algorithm to evaluate the adequacy of the model. As an example, the method is implemented in Melody Generator, an algorithm for generating tonal melodies. The program has a structure suited for generating, displaying, playing, and storing melodies, functions that are all accessible via a dedicated interface. The actual generation of melodies is based in part on constraints imposed by the tonal context, i.e., by meter and key, whose settings are controlled by means of parameters on the interface. It is also based on a set of construction principles, including the notion of a hierarchical organization and the idea that melodies consist of a skeleton that may be elaborated in various ways. After these aspects were implemented as specific sub-algorithms, the device produced simple but well-structured tonal melodies.
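The skeleton-plus-elaboration idea can be illustrated with a deliberately simplified sketch (a hypothetical toy, not Melody Generator's actual sub-algorithms): draw a skeleton of tonic-triad tones, then elaborate it by filling in the scale tones (passing notes) between consecutive skeleton notes so the surface line moves stepwise.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # one octave of C major, MIDI pitches
TRIAD = [60, 64, 67, 72]                     # tonic chord tones in that octave

def skeleton(length, rng):
    """Structural notes on strong beats: chord tones only."""
    return [rng.choice(TRIAD) for _ in range(length)]

def elaborate(sk):
    """Insert the scale tones lying strictly between consecutive
    skeleton notes, ascending or descending as needed."""
    melody = [sk[0]]
    for a, b in zip(sk, sk[1:]):
        between = [p for p in C_MAJOR if min(a, b) < p < max(a, b)]
        melody.extend(between if a < b else between[::-1])
        melody.append(b)
    return melody

rng = random.Random(3)
tune = elaborate(skeleton(4, rng))
```

The constraint structure mirrors the abstract's description: the key fixes the pitch inventory, the skeleton carries the hierarchical organization, and elaboration produces the melodic surface, which by construction stays in the scale and moves by step between chord tones.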