Covered wooden bridges and the principles of heavy timber framing by which they were built represent both a significant chapter in this country’s civil engineering heritage, and a subclass of bridges that are in immediate need of repair and rehabilitation. This study attempts to increase the information available to engineers who perform design work on wooden truss bridges by exploring their system and component behaviors through experimental tests and numerical models. Four bridges were considered as case studies: Morgan Bridge, a queen post truss; Pine Grove Bridge, a Burr arch-truss; Taftsville Bridge, a multiple king post truss with arch; and Contoocook Bridge, a Town lattice truss.

In this presentation, some recent developments in verification and validation (V&V) of predictive models are introduced. *Verification* is a mathematical concept which aims at assessing the accuracy of the solution of a given computational simulation compared to sufficiently accurate or analytical solutions. *Validation*, on the other hand, is a physics-based issue that aims at appraising the accuracy of a computational simulation compared to experimental data.

The proposed developments cast V&V in the form of an approximation-theoretic representation that permits their clear mathematical definition and resolution. In particular, three types of problems will be addressed. First, a-priori and a-posteriori error analyses of spectral stochastic Galerkin schemes, a widely used tool for uncertainty propagation, are discussed. Second, a statistical procedure is developed to calibrate the uncertainty associated with parameters of a predictive model from experimental or model-based measurements. An important feature of such a data-driven characterization algorithm is its ability to simultaneously represent both the intrinsic uncertainty and the uncertainty due to data limitation. Third, a stochastic model reduction technique is proposed to increase the computational efficiency of spectral stochastic Galerkin schemes for the solution of complex stochastic systems.
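As an illustrative aside (not part of the study itself), the basic mechanics of a spectral polynomial chaos representation can be sketched in a few lines: a random quantity is projected onto orthogonal polynomials (here, probabilists' Hermite) of a standard Gaussian germ, and its moments follow directly from the coefficients. The function and parameter values below are arbitrary choices for the sketch.

```python
import math
import numpy as np

def hermite_e(n, x):
    # Probabilists' Hermite polynomials He_n(x) via the three-term recurrence
    if n == 0:
        return np.ones_like(x)
    h_prev, h = np.ones_like(x), x
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def pc_coefficients(f, order, nquad=20):
    # Galerkin projection u_k = E[f(xi) He_k(xi)] / k! by Gauss-Hermite quadrature
    nodes, weights = np.polynomial.hermite_e.hermegauss(nquad)
    weights = weights / math.sqrt(2.0 * math.pi)   # normalize to the N(0,1) measure
    return np.array([np.sum(weights * f(nodes) * hermite_e(k, nodes))
                     / math.factorial(k) for k in range(order + 1)])

# Propagate x = mu + sigma*xi through f(x) = x^2 (arbitrary example)
mu, sigma = 2.0, 0.5
u = pc_coefficients(lambda xi: (mu + sigma * xi) ** 2, order=2)
mean = u[0]                                                     # exact: mu^2 + sigma^2
var = sum(u[k] ** 2 * math.factorial(k) for k in range(1, 3))   # exact: 4*mu^2*s^2 + 2*s^4
```

For a quadratic function the degree-2 expansion is exact, so the recovered mean and variance match the closed-form moments to quadrature precision.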

While the second part of this research is essential in the model validation phase, the first part is particularly important as it provides the basic components of the verification phase.

A constitutive model is a relationship between material stimuli and responses. Calibration of model parameters within well-defined constitutive models is thus key to the generation of accurate model-based predictions. One limitation of traditional material calibration is that only a few standardized tests are performed for estimating constitutive parameters, which makes the calibration process eminently deterministic. Moreover, measurements taken during standardized tests are usually global readings, which implicitly assume a ‘homogeneous’ material composition, smearing out the influence of any local effects. This work introduces the Functional Bayesian (FB) formulation as a probabilistic methodology for the calibration of constitutive models that incorporates material random responses and local effects into the assessment of constitutive parameters. This particular calibration process is known as the probabilistic solution to the inverse problem. Estimates of the statistics required for the Bayesian solution are obtained from a series of standard triaxial tests coupled with 3-dimensional (3D) stereo digital images, allowing material local effects to be captured. In addition, the probabilistic method includes the spatial representation of elemental ‘material’ properties by introducing spatially varying parameters within a 3D Finite Element Model (3D-FEM) to reproduce, to the extent possible, the actual heterogeneous response of the material. The sampling of spatial ‘material’ realizations is performed by the Polynomial Chaos (PC) method, which permits the simulation of multi-dimensional non-Gaussian and non-stationary random fields. Integration of the random parameters is performed via Markov Chain Monte Carlo (MCMC) and Metropolis-Hastings algorithms. The calibration of a soil sample is presented as a case study to illustrate the applicability of the method when the soil response lies within the linear elastic domain.
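To make the sampling machinery concrete, a minimal random-walk Metropolis-Hastings sketch is given below for a single scalar parameter with synthetic data. The Gaussian likelihood, weak prior, and all numerical values are hypothetical, and the example is far simpler than the spatially varying FB calibration described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: noisy observations of a single scalar parameter
true_theta, noise_sd = 5.0, 0.5
data = true_theta + noise_sd * rng.standard_normal(200)

def log_post(theta):
    # Gaussian likelihood plus a weak N(0, 10^2) prior (assumed for the sketch)
    return (-0.5 * np.sum((data - theta) ** 2) / noise_sd ** 2
            - 0.5 * theta ** 2 / 100.0)

samples, theta = [], 0.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis acceptance test
        theta, lp = prop, lp_prop
    samples.append(theta)
post_mean = np.mean(samples[1000:])              # discard burn-in
```

The retained chain samples approximate the posterior of the parameter; in the full method each state of the chain is a spatial PC realization rather than a scalar.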
Calibration results show a probabilistic description of the spatially distributed parameters and of the coefficients of the chaos representation that defines them. Inferences retrieved from the MCMC sampling include the analysis of the ‘material’ properties and of the coefficients of the PC representation, which enhances understanding of the randomness associated with the material composition and response.

Observed behavior of most physical systems differs from the behavior of their deterministic predictive models. Probabilistic methods offer a way to model and analyze a system such that the discrepancy between the behavior of the predictive model and that of the actual system is minimal in some sense. Certain parameters in these predictive models are represented as random quantities. The random eigenvalue problem arises naturally in common procedures for analyzing the behavior of such models.

The main contribution of this thesis is to present new insight into, and methods for, the analysis of the random eigenvalue problem. Three methods are used here to characterize the solution of the random eigenvalue problem, namely, the Taylor series based perturbation expansion, the polynomial chaos expansion coupled with Galerkin projection, and Monte Carlo simulation.

It is observed that the polynomial chaos based method gives more accurate estimates of the statistical moments than the perturbation method, especially for the higher modes. The difference in accuracy between these two methods becomes more pronounced as the system variability increases. Moreover, the chaos expansion gives a more detailed probabilistic description of the eigenvalues and the eigenvectors. However, the currently available statistical-simulation-based method of estimating the chaos coefficients is computationally intensive, and the accuracy of the estimated coefficients is influenced by the problems associated with random number generation. To circumvent these problems, an efficient method for estimating the coefficients is proposed. This method uses a Galerkin-based approach, orthogonalizing the residual in the eigenvalue-eigenvector equation to the subspace spanned by the basis functions used for approximation.
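The Monte Carlo baseline against which the perturbation and chaos methods are compared can be sketched in a few lines. The 2-DOF spring-mass chain and the 10% stiffness variability below are illustrative assumptions, not the thesis's actual systems.

```python
import numpy as np

rng = np.random.default_rng(1)

nsamp = 20000
lam1 = np.empty(nsamp)
for i in range(nsamp):
    # two spring stiffnesses with assumed 10% Gaussian variability
    k1, k2 = 1.0 + 0.1 * rng.standard_normal(2)
    # stiffness matrix of a fixed-free 2-DOF chain with unit masses
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    lam1[i] = np.linalg.eigvalsh(K)[0]   # smallest eigenvalue of each sample
mean1, std1 = lam1.mean(), lam1.std()

# nominal smallest eigenvalue at k1 = k2 = 1 is (3 - sqrt(5)) / 2, about 0.382
```

The sample moments of the eigenvalue ensemble are what the perturbation and chaos expansions approximate analytically, at a fraction of the cost of repeated eigensolves.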

A new representation of the statistics of the random eigenvectors is proposed to capture the modal interaction. This representation offers a more detailed description and a clearer prediction model of the behavior of the mode shapes of an uncertain system. It can also be used for efficient and accurate system reduction. An enriched version of the chaos expansion is proposed that is helpful in capturing the behavior of the eigenvalues and eigenvectors of systems with repeated or closely spaced eigenvalues.

Due to limitations in their manufacturing stage, many composites can exhibit a considerable level of randomness in their microstructure. Such variations can affect the mechanical response of the resulting specimens, especially when localized phenomena are involved. When multi-phase materials exhibit statistically inhomogeneous characteristics, as in the case of random media with a built-in gradient in composition, the analysis becomes considerably more complex, as the spatial dependency of their probabilistic descriptors cannot be disregarded.

This thesis presents a probabilistic simulation approach to the issue of characterizing and generating samples of such composites. Furthermore, the effects of randomness on the mechanical properties of these materials are investigated using micromechanical-based techniques. A novel method that is capable of generating non-Gaussian, non-stationary samples through a non-linear translation technique is introduced and applied to the generation of two-phase random media.
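The flavor of such a translation-based generator can be sketched as follows: a correlated Gaussian field is mapped through a nonlinear marginal transformation so the samples acquire non-Gaussian marginals with a spatially varying (non-stationary) mean. The exponential marginal, correlation length, and composition gradient below are illustrative choices, not the thesis's actual model.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# 1D grid with a squared-exponential correlation (length scale assumed)
n, m, ell = 200, 2000, 0.1
xg = np.linspace(0.0, 1.0, n)
C = np.exp(-((xg[:, None] - xg[None, :]) / ell) ** 2)
L = np.linalg.cholesky(C + 1e-6 * np.eye(n))   # factor for correlated sampling
G = L @ rng.standard_normal((n, m))            # m realizations of a Gaussian field

# Nonlinear translation: Gaussian marginals -> exponential marginals whose
# mean varies linearly in x (a built-in composition gradient)
Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
U = Phi(G)                                     # standard normal CDF of the field
mean_field = 1.0 + 2.0 * xg                    # non-stationary target mean
samples = -mean_field[:, None] * np.log(1.0 - U)   # exponential inverse CDF
emp_mean = samples.mean(axis=1)
```

Because the translation acts pointwise, the prescribed marginals are matched exactly, while the correlation of the translated field is a distorted version of the Gaussian one, which is the crux of calibrating such generators.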

The effect of spatial fluctuations on the local material response is measured by coupling the probabilistic framework of the analysis with micromechanical homogenization techniques. Two distinct cases are presented as an illustration of the difficulties involved in the analysis of statistically inhomogeneous composites and to measure the performance of the methods developed here.

Under certain wind conditions, stay cables of cable-stayed bridges have frequently exhibited large-amplitude vibrations. Such vibrations are often associated with the occurrence of rain, but large-amplitude vibrations without rainfall have also been observed. The mechanisms of these vibrations are still not well understood, and it is unclear whether the vibrations occurring with and without rainfall are related. Unless fully addressed, these problems significantly hinder the rational design of effective mitigation countermeasures for the vibrations, which potentially threatens the safety and serviceability of cable-stayed bridges. This study was conducted to understand the mechanisms of wind- and rain-wind-induced stay cable vibrations based on full-scale measurements of prototype vibrations in the field and tests of sectional models in the wind tunnel.

A parametric study of the stay cables is first performed based on full-scale measurement data. The Hilbert Transform is used to estimate the modal frequencies of stay cables, revealing that stay cables can essentially be treated as taut strings. The modal damping is assessed based on both ambient vibration and forced vibration data, indicating that the level of damping is very low in stay cables and that it is affected by the dynamic energy exchange between the cables and other structural elements of the bridge. Observed characteristics of stay cable vibrations, as well as their correlation with wind and rain, are presented. Based on these characteristics and correlations, several different types of vibrations are identified. In particular, important similarities between the frequently occurring large-amplitude rain-wind-induced vibrations and the classical Kármán-vortex-induced vibrations are explored and compared to a type of large-amplitude dry cable vibrations, providing significant insights into the mechanisms of these types of vibrations.
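The Hilbert-transform step can be illustrated on a synthetic free-decay signal: the analytic signal yields an instantaneous amplitude and phase, from which a modal frequency and a damping ratio follow by linear fits. The signal parameters below are hypothetical, and the analytic signal is built with a plain FFT rather than a signal-processing library.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (the Hilbert-transform construction); even n assumed
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(spec * h)

# Synthetic free-decay response of a lightly damped mode (parameters assumed)
fs, f0, zeta = 100.0, 2.0, 0.01
t = np.arange(0.0, 30.0, 1.0 / fs)            # 3000 samples (even count)
wn = 2.0 * np.pi * f0
wd = wn * np.sqrt(1.0 - zeta ** 2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

z = analytic_signal(x)
amp, phase = np.abs(z), np.unwrap(np.angle(z))
core = slice(100, -100)                       # drop edge-distorted samples
freq = np.polyfit(t[core], phase[core], 1)[0] / (2.0 * np.pi)   # modal frequency
decay = -np.polyfit(t[core], np.log(amp[core]), 1)[0]           # zeta * wn
zeta_est = decay / (2.0 * np.pi * freq)
```

On measured ambient data the same idea is applied band-by-band after filtering; the clean exponential envelope here is of course an idealization.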

To verify the observations in the field, sectional cable models were tested in the wind tunnel, revealing the inherent vortex-induced type of instability of yawed and inclined cables over a range of high reduced velocity. The observations in the wind tunnel are also compared with the results of previous wind tunnel tests reported in the literature. Based on the understanding from both the field and the wind tunnel, a framework is proposed for modeling the vortex-induced type of large-amplitude vibrations at high reduced velocity. The potential application of this model is also discussed. The observed vibrations are also used to assess the performance of passive viscous dampers and cross-ties in mitigating wind- and rain-wind-induced stay cable vibrations. Recommendations for rational design of these mitigation devices are also provided.

Laterally braced cold-formed steel beams generally fail through local and/or distortional buckling in combination with yielding. For many cold-formed steel studs, joists, purlins, or girts, distortional buckling may be the predominant buckling mode, unless the compression flange is partially restrained by attachment to sheathing or paneling. However, distortional buckling of cold-formed steel beams remains a largely unaddressed problem in the current North American Specification for the Design of Cold-Formed Steel Structural Members (NAS). Further, adequate experimental data on unrestricted distortional buckling in bending is unavailable. Therefore, two series of bending tests on industry-standard cold-formed steel C- and Z-sections were performed and are presented in this dissertation. In the first series of tests (phase 1), the testing setup was carefully designed to allow local buckling failure to form while restricting distortional and lateral-torsional buckling. The second series of bending tests used specimens nominally identical to those of the first phase and a similar testing setup; however, the corrugated panel attached to the compression flange was removed in the constant moment region so that distortional buckling could occur. The experimental data were used to examine current specifications and new design methods. A finite element model was developed in ABAQUS and verified against the two series of bending tests, and a number of cold-formed steel beams were then analyzed by finite element analysis.

An analytical method was derived to determine the elastic buckling stress of thin plates under a stress gradient, and the corresponding buckling coefficients were given. The stress gradient effect on the ultimate strength of thin plates was studied by finite element analysis. It was found that the stress gradient increases the buckling load of both stiffened and unstiffened elements, and that current design provisions may provide good strength predictions if the correct elastic buckling coefficient is used.
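For reference, the classical elastic buckling stress of a thin plate, which the buckling coefficient generalizes to stress-gradient situations, has the standard form

```latex
\sigma_{cr} = k \, \frac{\pi^2 E}{12\,(1-\nu^2)} \left(\frac{t}{b}\right)^2
```

where $E$ is the elastic modulus, $\nu$ Poisson's ratio, $t$ the plate thickness, and $b$ the plate width. In effect, the study's contribution is the appropriate value of $k$ when the applied stress varies along the plate.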

Because distortional buckling is characterized by relatively long buckling waves, it may be significantly influenced by the moment gradient. The moment gradient effect on the distortional buckling of cold-formed steel beams was therefore studied by finite element analysis. The results show that the moment gradient increases both the elastic distortional buckling moment and the distortional buckling strength of sections. A draft design provision was proposed for the Direct Strength Method to account for the moment gradient effect.

The tests have demonstrated that partial restraint on the compression flange may have a significant influence on the buckling mode and strength of cold-formed steel beams, and current design methods lack consideration of the partial restraint effect on distortional buckling. Therefore, research was conducted to explore the distortional buckling behavior of cold-formed steel beams with partial restraint on the compression flange. A simple numerical model was proposed to calculate the elastic buckling moment of sections with partial restraints, and a recommendation for design was given.

Simple hand solutions for calculating the elastic buckling of cold-formed steel sections were developed for design purposes, and draft provisions for the NAS to account for the moment gradient effect were proposed.

**Title: Portfolio Optimization and Value of Information for Catastrophe Insurance**

Quantifying losses resulting from natural catastrophes is a crucial part of our ability to understand and manage the damage caused by these catastrophic events. A significant component in reducing the uncertainty present in loss estimation is the use of better information relating to such factors as exact building geometry, construction quality, design, and vulnerability analysis. The question remains, though, how to judge whether the enhancement in loss-estimation accuracy justifies the cost of obtaining this improved information.

This study is an effort towards presenting a procedure for integrating, analyzing, and evaluating the impact of improved loss information on insurance portfolio-related decisions. A conceptual methodology is proposed with the aim of helping insurers decide on the optimal information resolution best suited for portfolio analysis. The sensitivity analysis emphasizes the error between simulated losses obtained from default building data and losses obtained from enhanced information, and how this error translates into misleading predictions of insurance objectives. To that end, an insurance portfolio optimization problem is also formulated to maximize profit and control exposure risk. Two new components are incorporated: the means to control the correlation among losses, and the ability to reach a geographically and structurally resolved portfolio.
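As a toy illustration of the optimization component (not the study's actual formulation), a mean-variance trade-off with correlated losses admits a closed-form optimum. The profit vector, covariance matrix, and risk-aversion weight below are entirely hypothetical.

```python
import numpy as np

# Three hypothetical regional sub-portfolios: expected profits mu and a loss
# covariance Sigma whose off-diagonal terms encode correlation among losses
mu = np.array([1.0, 0.8, 1.2])
Sigma = np.array([[1.0, 0.6, 0.1],
                  [0.6, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
gamma = 2.0   # risk-aversion weight (assumed)

# maximize mu' w - (gamma / 2) w' Sigma w  =>  w* = Sigma^{-1} mu / gamma
w = np.linalg.solve(Sigma, mu) / gamma
```

In this sketch the correlation control enters only through Sigma; the proposed formulation additionally constrains geographic and structural resolution, which has no analogue in this three-asset toy.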

**Title: Optimization Using Noisy Simulations: Trust Region, Surrogate Surface, and Adaptive Sampling**

In engineering design, computer simulations are often used in optimization. To enhance robustness, it is necessary to include uncertainties that may come from physical randomness, insufficient knowledge or imperfect modeling. In this dissertation, the problem of optimization with noisy simulations is considered. The input parameters are separated into two groups: design variables and random factors. The optimization goal is to find the values for the design variables which minimize expected costs under uncertainties as quantified by the random factors. The approach is to integrate surrogate-surface methods into a framework of trust-region-based sequential minimization.

A significant portion of the dissertation is devoted to establishing provable convergence. Convergence proofs are derived for unconstrained and constrained optimization under a set of mathematical conditions for objective function uncertainty. The conditions are in terms of probabilistic bounds on the errors in the mean of the objective function and its gradient. If a Gaussian model is used for the errors, then the conditions can be simplified in terms of the bias, variance and mean-square values. These statistics can be estimated from simulation results.

To obtain a surrogate surface, which is simply an estimate of the mean of the objective function, local linear regression is used. This regression method is well suited for subsequent minimization by the trust-region algorithm because the support of the local regression kernel can be adapted to the size of the trust-region. Furthermore, the statistics of the errors in the regression fit can be derived in terms of a local second-order fit of the true mean function. The analysis of such a second-order fit is theoretically consistent with the second-order fit in the convergence proofs of the trust-region algorithm. Hence, it is possible to show in this dissertation, that an optimization approach based on local linear regression coupled with a trust-region algorithm is provably convergent.
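A minimal sketch of the local linear regression step is given below: a Gaussian kernel weights the noisy simulation outputs near the point of interest, and a weighted least-squares fit returns an estimated mean value and gradient, exactly the quantities a trust-region step consumes. The objective function, noise level, and bandwidth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_linear(x0, X, y, h):
    # Local linear regression at x0 with a Gaussian kernel of bandwidth h;
    # h plays the role the trust-region radius plays in the text
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    A = np.stack([np.ones_like(X), X - x0], axis=1)
    AtW = A.T * w                       # weighted design matrix
    beta = np.linalg.solve(AtW @ A, AtW @ y)
    return beta[0], beta[1]             # estimated mean value and slope at x0

# Noisy "simulations" of an assumed true mean f(x) = (x - 1)^2
X = rng.uniform(-2.0, 4.0, 800)
y = (X - 1.0) ** 2 + 0.3 * rng.standard_normal(800)
fhat, ghat = local_linear(2.0, X, y, h=0.25)
```

Shrinking h alongside the trust-region radius is what ties the regression error analysis to the convergence proof: the local second-order bias term decays with the region size.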

To illustrate the surrogate-based optimization approach, a well-known test problem of truss design is analyzed. It is shown that optimization under uncertainty leads to a significantly different solution for the truss design as compared with an equivalent optimization problem under mean conditions.

Equation-free techniques were recently proposed to study multiscale systems whose macroscopic evolution equations may not be explicitly available. So-called Coarse Time-Steppers were suggested to implement the macroscale evolution via microsimulators. Temporal derivatives can be obtained numerically from these time-steppers instead of being computed from explicit macroscopic equations, and can therefore be incorporated into traditional integration schemes to evolve the numerical representation of the macroscale observable. These equation-free techniques have been effectively applied in Coarse Projective Integration, Coarse Bifurcation Analysis, and Coarse Dynamic Renormalization of multiscale systems. In my research, I extended Coarse Projective Integration and Coarse Dynamic Renormalization to macroscopically multidimensional particle systems. Marginal and conditional inverse cumulative distribution functions (ICDFs) were utilized as the macroscale observables, and it was shown that an orthogonal basis for these observables is easy to find. In fact, with these observables, multidimensional problems were converted to effectively one-dimensional problems, and Coarse Projective Integration and Coarse Renormalization could be implemented on a reduced macroscale slow manifold. It was also found that Coarse Renormalization for self-similar multidimensional multiscale systems requires only a single template condition, instead of multiple template conditions as originally expected. The proposed technique was applied to a Brownian particle system in a Couette flow and produced results that matched true evolutions and theoretical predictions well.
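The coarse projective integration idea can be sketched for a toy particle system: short microscopic bursts estimate the coarse time derivative, which is then used to project the macroscale observable forward by a much larger step, after which the ensemble is "lifted" to be consistent with the projected state. The Ornstein-Uhlenbeck microsimulator, ensemble size, and step sizes below are illustrative assumptions, and the coarse observable here is simply the ensemble mean rather than an ICDF.

```python
import numpy as np

rng = np.random.default_rng(4)

npart, dt_micro, k_burst, dt_proj = 20000, 0.01, 10, 0.15
x = 2.0 + rng.standard_normal(npart)   # particle ensemble; coarse observable = mean
t, T = 0.0, 2.0
while t < T:
    # microscopic burst: Euler-Maruyama steps of an Ornstein-Uhlenbeck process
    ts, ms = [], []
    for j in range(k_burst):
        x = x - x * dt_micro + np.sqrt(2.0 * dt_micro) * rng.standard_normal(npart)
        ts.append(t + (j + 1) * dt_micro)
        ms.append(x.mean())
    # estimate the coarse time derivative from the burst, then project forward
    slope = np.polyfit(ts, ms, 1)[0]
    m_proj = ms[-1] + dt_proj * slope
    # lifting: shift the ensemble so its mean matches the projected coarse state
    x = x + (m_proj - ms[-1])
    t = ts[-1] + dt_proj
```

For this toy the exact coarse dynamics are dm/dt = -m, so the projective scheme should track 2 exp(-t) while spending microscopic effort on only a fraction of the time horizon.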

Sequential data assimilation has been utilized in diverse scientific and engineering fields to retrieve model predictions via experimental measurements. However, its applications have been limited to single-scale problems, where model predictions at one scale are retrieved or calibrated only by measurements at that scale. For multiscale systems, for which microscopic observations are usually not available, it is desirable to utilize measurements at the macroscale to update microscopic model states. This introduces the problem of multiscale data assimilation. In my research, two techniques for multiscale data assimilation were proposed. One technique coupled the model states across different scales to form an extended state. A recently devised data assimilation method, the ensemble Kalman filter (EnKF), was applied to update this extended model state, from which the updated states at different scales could then be extracted. The other technique employed the Coarse Time-Stepper. The microscopic states were first restricted to the macroscale slow manifold, where corresponding macroscale states were updated or retrieved via the EnKF. The updated macroscale states were subsequently lifted back to the microscale space, from which updated state statistics at the microscale could be obtained. Estimation of boundary particle fluxes and of particle positions in a one-dimensional domain was used to exemplify the two proposed techniques, respectively, and both were shown to give updated statistics that agreed well with the true statistics.
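The core EnKF update can be sketched for the simplest possible case, a directly observed scalar state: the Kalman gain is built from the ensemble variance, and each member is nudged toward a perturbed copy of the observation. The prior ensemble, observation value, and noise level below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Prior ensemble for a scalar coarse state; the observation is direct: y = x + noise
n_ens, obs_sd = 500, 0.5
ens = 1.0 + 2.0 * rng.standard_normal(n_ens)   # assumed prior: mean 1, sd 2
y = 4.0                                        # assumed observed value

# EnKF update: Kalman gain from the ensemble variance, perturbed observations
P = np.var(ens, ddof=1)
K = P / (P + obs_sd ** 2)
y_pert = y + obs_sd * rng.standard_normal(n_ens)
ens_post = ens + K * (y_pert - ens)
```

For this linear-Gaussian case the updated ensemble should match the exact Bayesian posterior up to sampling error; in the multiscale setting, the same update acts either on the extended cross-scale state or on the restricted macroscale state before lifting.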

Performance based design (PBD) is emerging as the guiding principle for the next generation of structural design specifications. PBD provides the engineer with greater flexibility to select appropriate performance criteria and prediction techniques, but also demands more sophisticated analyses. The presence of uncertainty in structural analysis, behavior and design — especially in the prediction of new performance measures — requires a probabilistic approach to PBD.

The first component of this research considers reliability-based specifications for PBD, using the example of advanced analysis of steel frames. Design by advanced analysis uses non-linear structural analyses to predict system performance measures. Current advanced analysis proposals use the resistance factors of the load and resistance factor design (LRFD) specifications with no probabilistic justification. The probabilities of failure of sixteen two-story, two-bay steel frames, designed by both LRFD and advanced analysis, are estimated using Monte Carlo simulation and importance sampling schemes. The simulated strength and load distributions are used to develop resistance factors for the limit states of first plastic hinge and plastic collapse. The results indicate that design by advanced analysis can maintain the desired reliability for system failure, but may result in unsatisfactory serviceability performance. Two particular difficulties of reliability-based specifications for design by advanced analysis are discussed: practical calibration for system-based limit states, and the determination of resistance factors applicable to a wide class of structures.
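The simulation machinery can be illustrated with a scalar limit state: crude Monte Carlo and a simple importance sampling scheme (sampling density shifted toward the failure region) estimate the same failure probability. The resistance and load distributions below are hypothetical, not those of the steel frames in the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical limit state g = R - S: resistance R ~ N(5,1), load S ~ N(3,1)
n = 50000
R = 5.0 + rng.standard_normal(n)
S = 3.0 + rng.standard_normal(n)
pf_mc = np.mean(R - S < 0.0)                  # crude Monte Carlo estimate

# Importance sampling: draw R from a density shifted toward the failure region
R_is = 3.5 + rng.standard_normal(n)           # shifted mean is a design choice
w = np.exp(-0.5 * (R_is - 5.0) ** 2 + 0.5 * (R_is - 3.5) ** 2)   # ratio p/q
pf_is = np.mean((R_is - S < 0.0) * w)

# exact answer for this toy: Phi(-(5 - 3) / sqrt(2)), about 0.0786
```

The payoff of the shift is variance reduction at small failure probabilities, which is what makes frame-level probabilities of failure affordable to estimate.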

The second component of this research applies Bayesian surrogate models to engineering design, which is viewed as an iterative process of information gathering and decision making. A Bayesian surrogate model relates individual design variables to system performance, including both aleatory and epistemic uncertainties. Bayesian surrogate models can incorporate prior knowledge, update knowledge based on evidence, and propose design revisions. A Bayesian network is used to update the parameters of the surrogate model based on information collected from trial designs. Techniques of Bayesian experimental design are applied to propose design revisions which maximize the expected information gain or relative entropy. The Bayesian surrogate framework is applied to several structural design examples. The results suggest the need to develop new information criteria specific to engineering design and PBD.

Fundamental period and damping ratio are two of the most important parameters involved in dynamic analyses of buildings. These parameters are usually assigned constant values typically through the use of simplified models or by using engineering judgment. The variability associated with these values is frequently ignored. Measurements of these parameters in the completed structure may or may not match those assumed at the design stage and the effects and implications of such differences are usually not fully explored or understood.

To develop models that reliably estimate the values of period and damping expected in actual structures, a comprehensive database of full-scale measurements was compiled and rigorously analyzed. An analysis of variance (ANOVA) identifies the number of stories for the period data, and the number of stories and level of vibration for the damping data, as key factors that potentially affect each parameter. Estimation models are developed for different combinations of factors, and model performance, measured through the standard error, is observed to improve with additional factors. The models are greatly simplified through constrained variations in model coefficients and are considered to appreciably improve the state of the art in period and damping estimation.
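On entirely synthetic data (no relation to the compiled database), the kind of single-factor estimation model described above, a power law in the number of stories judged by its standard error, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic illustration only: "true" period grows linearly with story count N,
# with lognormal scatter; fit a power law T = a * N^b in log space
n_bldg = 300
N = rng.integers(3, 60, n_bldg)
T = 0.05 * N * np.exp(0.25 * rng.standard_normal(n_bldg))

X = np.stack([np.ones(n_bldg), np.log(N)], axis=1)
coef, res, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
a, b = np.exp(coef[0]), coef[1]
se = np.sqrt(res[0] / (n_bldg - 2))   # standard error: the model-quality metric
```

Adding a second factor, such as vibration level for damping, simply adds a column to X; the drop in the standard error then measures the value of that factor, mirroring the ANOVA-driven model building described above.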

The large quantity of data allows for a proper and careful analysis of the variability in each estimation model. Model variability is quantified through two functions: a scalar value that represents the variability among measurements made on the same building and in the same lateral direction, and a function that represents the variability observed from building to building. A rigorous form representing the model variability is provided along with a much simpler form developed for possible inclusion into standards.

The effect of parameter variability on seismic response estimates was investigated. A proposed performance-driven design procedure identifies period and damping values that achieve a specified level of performance. For a general seismic design spectrum, the engineer can apply a level of conservatism to the performance level or to the selection of period and damping from regions of practical values, which are defined through the parameter distributions. The effects of variability reduction are observed, and possible directions for future development are discussed.

Models of uncertainty have wide application beyond reliability estimation. In this talk, a model related to computer science approaches is used to solve problems in global optimization and structural mechanics. In contrast to the usual statistical methods, a classifier based on Bayesian classification trees is adopted. In this method, human expertise is quantitatively modeled and used to construct the feature space. Knowledge functions are defined in the feature space, approximating the distribution of promising designs. Within the feature space, promising designs, which can be widely scattered in the original high-dimensional design space, become concentrated in a relatively small number of discrete subregions. The knowledge function provides an efficient way to generate starting points for a multi-start global optimization strategy. Furthermore, the classifier provides an efficient knowledge transfer mechanism through reuse of the knowledge functions for solving related, more complex problems.

The method is demonstrated in the design of thin-walled steel columns, where the design space is too large to be effectively handled by common evolutionary techniques such as genetic algorithms. The method is also demonstrated with an entirely different problem of an analysis of a composite material with random properties. In the latter problem it is shown that the features, obtained by principal component analysis of spatial patterns, are closely related to Eshelby’s theorem.

Microfluidic systems are stimulating increasing interest in both industry and academia. In the construction of high-fidelity models capable of adequately predicting the behavior of these devices, uncertainty quantification (UQ) emerges as a main ingredient for resource allocation, engineering design, and model validation. This thesis demonstrates the application of a UQ methodology based on a spectral polynomial chaos approach to the modeling of electrokinetically and pressure-driven microchannel flow.

A numerical study of band-crossing chemical reactions is first conducted, and general solution trends are interpreted in terms of a reduced set of dimensionless parameters. The capability of UQ techniques is then illustrated in the context of reduced design models for straight and serpentine channels. Using stochastic UQ tools, deterministic design rules are converted into design envelopes, highlighting the impact of uncertainties in design and operating parameters. Finally, the UQ methodology is extended to a fully coupled 2D model for electrokinetically pumped microchannel flow of a reacting mixture. Case studies are presented which investigate sample dispersion mechanisms due to buffer disturbances and random variability in zeta potential.

An important consideration in the design of long-span bridges is the effect of wind loading on the bridge response. Commonly used methods to assess the wind-induced response of these structures include wind tunnel testing and/or analytical approaches that require experimentally determined parameters. While these techniques have predominantly been compared with each other, there have been few opportunities to evaluate them against actual bridge responses. Motivated by this idea, a long-term full-scale measurement program was conducted on a cable-stayed bridge to measure its response under a range of meteorological conditions. The measured responses were compared with predictions obtained from a multi-mode frequency-domain approach, which was able to capture the coupling between closely spaced modes of the structure. Where possible, input parameters used in the analysis were calibrated using measured quantities at the bridge, to ensure that they were representative. Vortex-induced vibrations of the bridge were also investigated, and such events were carefully identified. The response comparisons showed that the predictions were in good agreement with measured values, successfully capturing the buffeting response. A parameter study identified the vertical wind spectrum as one of the primary factors influencing bridge response, and indicated that proper calibration of the spectral models used in the analysis provided improved predictions for some of the records.

One of the new challenges in Civil Engineering involves the analysis of uncertainty in complex engineering systems. As the accuracy of measurements increases and new composite materials are introduced, we begin looking into the behavior of systems that span a wide range of scales, from the atomic scale to the scale of continuum mechanics. The study of the role of uncertainty and its propagation should serve as a principal guideline in such investigations. Of particular interest to us are systems comprised of materials with a microstructure that cannot be neglected in comparison to the size of the system. In modeling such materials, classical continuum mechanics, or the local theory, may yield deviating predictions because nonlocal interactions between microstructures, and their accompanying effects, are neglected. Basic modifications must be made to the local theory as we begin to investigate nonlocal effects.

The objective of the present research is to construct a material model that is consistent with the variability of heterogeneity and nonlocal interactions of material at the microstructure. The modeling of microstructural variability, the propagation of uncertainties across scales and the prediction of response uncertainties are the emphases in this modeling procedure.

This dissertation focuses on the theoretical treatment for the stochastic modeling of materials with random microstructures in the framework of nonlocal theories. In this work, the random microstructural interactions are represented by the integration of subscale variables and become a part of the constitutive equation for global state variables, as in the classical nonlocal field theory. The integration reflects the contribution from the subscale to the global states. The Green’s function associated with the integration should be calibrated as a constitutive property. The global behavior, being the overall contribution accumulated from all the scales, must satisfy the admissibility and boundary conditions. In this manner the behavior of a multiscale system can be stated as a boundary value problem.
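As a toy illustration of such a constitutive integral, the sketch below evaluates a 1D Eringen-type nonlocal stress, sigma(x) = Int alpha(x - xi) E eps(xi) dxi, with an assumed Gaussian attenuation kernel standing in for a calibrated Green's function; the kernel is normalized so that a uniform strain field recovers the local response:

```python
import math

def nonlocal_stress(strain, x, E=1.0, ell=0.1):
    """1D nonlocal elasticity on a uniform grid x with spacing dx:
    sigma(x_i) = sum_j alpha(x_i - x_j) * E * eps(x_j) * dx,
    where alpha is a Gaussian attenuation kernel of internal length ell,
    discretely normalized so that uniform strain gives sigma = E * eps."""
    n = len(x)
    dx = x[1] - x[0]
    stress = []
    for i in range(n):
        w = [math.exp(-((x[i] - xj) / ell) ** 2) for xj in x]
        wsum = sum(w) * dx                       # discrete normalization of the kernel
        s = sum(wi * E * ej for wi, ej in zip(w, strain)) * dx / wsum
        stress.append(s)
    return stress
```

In the dissertation the kernel is a calibrated constitutive property rather than an assumed Gaussian, and it becomes random when the microstructure is random; this sketch only shows where the integral sits in the constitutive equation.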

Recent natural disasters, such as the earthquakes in Northridge, California, and Kobe, Japan, and Hurricanes Hugo and Andrew, have inflicted enormous economic losses on the public and the insurance industry. These losses and their resulting impacts have led to renewed interest in the development and implementation of performance-based design (PBD). The performance levels in typical PBD recommendations are mapped to measurable structural responses and limit states. To facilitate this development, an efficient procedure is needed to accurately assess the system reliabilities of realistic structures. This dissertation is dedicated to developing such a procedure.

Analysis of the reliability of complex structural systems requires an efficient simulation procedure coupled with finite element analysis. Directional simulation (DS) is among the most efficient methods for system reliability analysis in the sense that every direction can yield information about system failure. However, the randomly generated directions may not represent the underlying probability distributions very well when the number of directions is limited. Various point sets, collectively named deterministic point sets (DPS) herein and developed in different domains of science and engineering, have high fidelity in representing the distribution and can reduce simulation error. DPS from the uniform distribution are emphasized herein, since the uniform distribution is commonly used in DS. DPS include spherical t-designs, Fekete points, good lattice point (GLP) sets, spiral points, and advanced hyperspace division method (AHDM) points. Extensive tests on the efficiency and accuracy of these point sets in system reliability analysis are conducted. Fekete point sets are shown to have some particularly attractive features in terms of accuracy.
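In its simplest form, directional simulation in standard normal space averages, over unit directions, the conditional probability of exceeding the distance to the limit-state surface. A minimal two-dimensional sketch with a linear limit state (the geometry and the reliability index beta = 3 are illustrative assumptions; in 2D the radial survival function is that of the chi distribution with two degrees of freedom):

```python
import math
import random

def directional_simulation(beta, n_dirs=20000, seed=1):
    """Directional simulation in 2D standard normal space for the linear
    limit state g(u) = beta - u1 (failure when u1 > beta). Along a unit
    direction at angle theta, failure occurs for radii beyond
    r* = beta / cos(theta), and the conditional failure probability is
    the chi(2) survival function P(R > r*) = exp(-r*^2 / 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, 2.0 * math.pi)  # random direction on the unit circle
        c = math.cos(theta)
        if c > 1e-12:                            # limit state reachable only if cos(theta) > 0
            r_star = beta / c                    # distance to the limit-state surface
            total += math.exp(-0.5 * r_star ** 2)
    return total / n_dirs

est = directional_simulation(3.0)
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))    # Phi(-3): closed-form failure probability
```

Replacing the random directions above with a deterministic point set such as Fekete points is precisely the refinement the dissertation investigates: the same estimator, evaluated over directions that cover the sphere more evenly.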

Two types of neural networks, namely the feed-forward back-propagation network and the radial basis network, are utilized to further improve the efficiency in a two-phase point refinement scheme based on the Fekete method. The neural network works as a parallel concept to importance sampling in identifying the regions in hyperspace that contribute significantly to the failure probability. The Fekete point method and neural network technique form the essential statistical module denoted “FeketeNN” used to perform system reliability analyses in this dissertation.

Load space formulation has been shown to be particularly useful in limiting the number of calls to the finite element programs in system reliability analysis. These techniques are demonstrated using several realistic plane steel structures. With the help of the load space formulation, the FeketeNN method can achieve accurate estimates of the system failure probabilities efficiently.

In the rational prediction of the behavior of physical systems, models are often relied upon. These predictive tools are calibrated, in terms of their parameters, on the basis of data. A recurrent phenomenon in this context is the random scatter in model parameters. Stochastic models have thus been developed, in which the parameters are treated as random entities. The probabilistic characterization of the parameters is often hampered by practical limitations, which induces inaccuracies in the stochastic predictions of the response.

This thesis reports a novel methodology to estimate the error in stochastic model-based predictions. It relies on the response representation in a Polynomial-Chaos basis. The error is approximated via Taylor expansion and thus hinges on the explicit computation of the stochastic response gradient. The computed error estimate sheds light on the sensitivity of particular response statistics with respect to statistics of the stochastic parameters. This helps to raise the confidence in the model predictions.

The method is demonstrated on two model problems, involving a Bernoulli beam with random bending rigidity and the potential flow in a porous medium with random conductivity. In both cases the parameters are modeled as random fields and are discretized with the Karhunen-Loeve expansion. The finite element method is used for the spatial discretization.
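For intuition, response statistics follow directly from the Polynomial-Chaos coordinates. The sketch below uses the classical closed-form Hermite-chaos expansion of a lognormal random variable, one of the few cases with simple exact coefficients (the parameter values are illustrative):

```python
import math

def lognormal_pc_coeffs(mu, sigma, order):
    """Hermite polynomial-chaos coefficients of u = exp(mu + sigma*xi),
    xi ~ N(0,1): the classical closed-form result
    u_k = exp(mu + sigma^2/2) * sigma^k / k!."""
    scale = math.exp(mu + 0.5 * sigma ** 2)
    return [scale * sigma ** k / math.factorial(k) for k in range(order + 1)]

def pc_mean_var(coeffs):
    """Mean and variance from PC coordinates, using the orthogonality of
    probabilists' Hermite polynomials: E[He_k^2] = k!."""
    mean = coeffs[0]
    var = sum(c ** 2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
    return mean, var
```

Once the response of a stochastic model is expressed in such a basis, its statistics (and, as in this thesis, their sensitivities to the parameter statistics) are available by manipulating the coefficients alone.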

This dissertation presents results of an experimental and theoretical investigation of the cross-anisotropic behavior of gravitationally deposited sands under general three-dimensional loading conditions.

The laboratory study included a series of cubical triaxial tests using a true triaxial apparatus with improved accuracy of measurements and control. The cubical tests were performed on Santa Monica beach sand under general stress conditions allowing for exploration of the entire range of Lode’s angles [0°, 180°], necessary for complete description of a cross-anisotropic material. Stress-strain behavior and strength, failure patterns and shear banding, volumetric response and dilative properties were analyzed in view of the inherent material anisotropy.
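A complete description over the full range of stress directions hinges on the Lode angle computed from the deviatoric stress invariants. A small sketch, using the common convention in which triaxial compression maps to 0° and triaxial extension to 60° (the study's 0° to 180° range reflects the additional directional dependence of a cross-anisotropic material, which this sketch does not capture):

```python
import math

def lode_angle(s1, s2, s3):
    """Lode angle (degrees) from principal stresses, via the deviatoric
    invariants J2 and J3 and the convention
    cos(3*theta) = (3*sqrt(3)/2) * J3 / J2^(3/2),
    so theta = 0 in triaxial compression and 60 in triaxial extension."""
    p = (s1 + s2 + s3) / 3.0
    d1, d2, d3 = s1 - p, s2 - p, s3 - p
    j2 = 0.5 * (d1 * d1 + d2 * d2 + d3 * d3)
    j3 = d1 * d2 * d3
    c = 1.5 * math.sqrt(3.0) * j3 / j2 ** 1.5
    c = max(-1.0, min(1.0, c))                   # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(c) / 3.0)
```

For example, a stress state with one major principal stress (2, 1, 1) sits at 0°, and one with one minor principal stress (2, 2, 1) sits at 60°, the two ends of the conventional deviatoric sector.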

A series of high-pressure isotropic compression tests were performed on Santa Monica beach and Nevada sands using four different densities and two specimen deposition methods. The evolution of the inclination of the plastic strain increment vector to the hydrostatic axis with pressure was evaluated and analyzed using different elastic models.

A principle of rotation of the principal stress coordinate system to account for the experimentally observed transverse isotropic effects was introduced. A cross-anisotropic constitutive model was derived based on the existing model for isotropic materials. A thorough analysis of response of each component of the modified model was performed. The new model was implemented to predict the results of the cubical triaxial tests producing good fits with the experimental data.

The objective of the present work is to quantify and manage the confidence in model-based predictions for complex systems, as exemplified by pollutant transport in a watershed. A probabilistic framework is adopted for representing uncertainty, and a constrained optimization problem is posed whose solution provides the resource-allocation strategy that maximizes the target confidence. The hydrologic cycle, which involves multi-physics phenomena, is the driving force behind the transport of pollutants in the watershed. The pollutant transport mechanisms addressed in this work include surface runoff and advection in streams and rivers. These modes of transport, coupled together, form an integrated transport model for a given watershed, and the thesis addresses the flow of data and information between the components making up this model. Given the natural variability and complex boundary conditions that characterize this problem, the parameters of the sub-models are modeled as spatially, and sometimes temporally, varying random processes. The Karhunen-Loeve expansion is used to represent these processes in terms of a denumerable set of random variables, and the predicted state variables are then identified with their coordinates with respect to a basis formed by the Polynomial Chaos random variables. Once the coefficients in the Polynomial Chaos representation have been computed, a complete probabilistic characterization of the state variables can be obtained. It is worth noting that a proper treatment of the interactions across the interfaces of the sub-models is essential for the analysis of uncertainty propagation. An optimization scheme is then developed that incorporates a budget constraint while minimizing the uncertainty of the final prediction by selectively reducing the uncertainty of the input parameters.
The thesis makes original contributions to the computational modeling of integrated uncertain systems and to the management of uncertainty in the associated predictions.
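The resource-allocation step can be caricatured with a greedy scheme: if the prediction variance is approximated to first order as a sensitivity-weighted sum of parameter variances, budget is repeatedly spent where the marginal variance reduction per unit cost is largest. Everything here (the halving rule, the first-order variance formula, and all numbers) is an illustrative assumption, not the optimization algorithm of the thesis:

```python
def greedy_allocation(sens, var, costs, budget):
    """Greedy budget allocation: predicted output variance is approximated
    as sum_i sens[i]^2 * var[i]. Each purchase halves one parameter's
    variance at its given cost; we repeatedly buy the largest variance
    reduction per unit cost until the budget is exhausted."""
    var = list(var)
    spent = 0.0
    while True:
        best, gain = None, 0.0
        for i, c in enumerate(costs):
            if spent + c <= budget:
                g = (sens[i] ** 2) * (var[i] * 0.5) / c   # variance removed per unit cost
                if g > gain:
                    best, gain = i, g
        if best is None:
            break
        var[best] *= 0.5
        spent += costs[best]
    return var, sum(s * s * v for s, v in zip(sens, var))
```

With sensitivities (1, 2), unit variances, unit costs, and a budget of 2, the scheme spends both units on the more influential second parameter, quartering its variance.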

An improved understanding of the behavior of ships after running aground could lessen the environmental and economic damage caused by ship groundings. Wave forces often push grounded ships towards the beach, sometimes so far ashore that they become unreachable by salvage vessels. An estimate of the distance a grounded ship may migrate in a given time would help ship owners, insurers, and government officials make critical decisions in the initial hours after a ship grounding. The present study analyzes linear and nonlinear grounded ship motions, both experimentally and theoretically. Experiments were conducted to measure the motion response of an embedded ship hull at model-scale to both small-amplitude and solitary waves. The predicted oscillatory motion responses, based upon prior theoretical work on the linear motion of grounded ships, are compared to results from the small-amplitude wave experiments. A new method is presented to predict the distance a grounded ship will migrate ashore in a given time. This method shows good correlation with the migration distances observed in the solitary wave experiments.

Many cable-stayed bridges around the world have exhibited excessive wind-induced vibrations of the main stays, inducing undue stresses and fatigue in the cables. To suppress these vibrations, fluid dampers are often attached to stays near the anchorages. To enable effective and economical design of such dampers, it is important to develop a thorough understanding of the dynamics of a stay cable with attached damper.

To investigate the dynamics of the cable-damper system, a fairly simple model is first considered: a taut string with linear viscous damper. An analytical formulation of the free vibration problem is used to explore the solution characteristics, revealing that damper-induced frequency shifts play an important role in characterizing the response of the system due to the concentrated nature of the damping force. A critical value of the damper coefficient is identified, and for a supercritical damper, certain modes of vibration are completely suppressed, while others emerge, including a non-oscillatory decaying mode.
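For a taut string of length L, tension T, and mass per unit length m, with a viscous damper c at x_c = pL, separation of variables y(x, t) = Y(x)e^{st} leads to a complex characteristic equation, sinh(gamma L) + (c/sqrt(Tm)) sinh(gamma x_c) sinh(gamma (L - x_c)) = 0, whose roots encode both the damper-induced frequency shifts and the modal damping. The sketch below finds one root by Newton iteration; the parameter values are illustrative, chosen near the small-p optimum where the attainable damping ratio approaches x_c/(2L):

```python
import cmath
import math

def damped_mode(eta, p, mode=1, iters=60):
    """Complex eigenvalue g = gamma*L of a taut string with a viscous damper
    at x_c = p*L, from sinh(g) + eta*sinh(g*p)*sinh(g*(1-p)) = 0, where
    eta = c/sqrt(T*m). Newton iteration starts from the undamped mode
    g0 = i*n*pi; the damping ratio is zeta = -Re(g)/|g|."""
    g = 1j * math.pi * mode
    for _ in range(iters):
        f = cmath.sinh(g) + eta * cmath.sinh(g * p) * cmath.sinh(g * (1 - p))
        df = (cmath.cosh(g)
              + eta * (p * cmath.cosh(g * p) * cmath.sinh(g * (1 - p))
                       + (1 - p) * cmath.sinh(g * p) * cmath.cosh(g * (1 - p))))
        step = f / df
        g -= step
        if abs(step) < 1e-14:
            break
    return g, -g.real / abs(g)

# Mode 1, damper at 2% of span, damper sized so eta*pi*p = 1 (near-optimal
# in the small-p asymptotics): zeta should be close to p/2 = 0.01.
g, zeta = damped_mode(eta=1.0 / (math.pi * 0.02), p=0.02)
```

Sweeping eta in this solver reproduces the behavior described above: the damping ratio first grows with the damper coefficient, peaks, and then decays as an over-sized damper effectively clamps the cable at the damper location.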

The influence of bending stiffness is considered using a dynamic stiffness formulation of the free-vibration problem for a tensioned beam with attached damper. Many of the solution characteristics observed in this case are reminiscent of those for the taut string, and damper-induced frequency shifts are again important. The nature of the boundary conditions has a significant effect when bending stiffness is appreciable, and for a damper located near the end of a tensioned beam, significantly higher damping ratios can be achieved if the supports are not fixed against rotation.

Dampers can also have nonlinear characteristics, either unintentionally or by design, and equivalent linear solutions are developed for the vibrations of a taut string with two different types of nonlinear dampers: a power-law damper and a viscous damper with a friction threshold. Relevant nondimensional parameter groupings are identified, and asymptotic approximations are obtained relating these nondimensional parameters to the modal damping ratios for cases when the damper-induced frequency shifts are small. The nature of the dependence of nonlinear damper performance on the amplitude and mode of vibration is investigated, revealing some potential advantages that may be offered by a nonlinear damper over a linear damper.
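The energy-equivalence step behind such linearizations can be sketched directly: the energy a power-law damper dissipates over one harmonic cycle is matched to that of a linear damper at the same frequency and amplitude. The quadrature and parameter choices below are illustrative; the friction-threshold damper is handled analogously with a different force law:

```python
import math

def c_equivalent(c, alpha, omega, amplitude, n=100000):
    """Equivalent viscous coefficient of a power-law damper
    F = c*|v|^alpha*sgn(v) under harmonic velocity v = V*cos(omega*t),
    V = omega*amplitude. Matching dissipated energy per cycle gives
    c_eq = c * V^(alpha-1) * (1/pi) * Int_0^{2pi} |cos t|^(alpha+1) dt."""
    V = omega * amplitude
    dt = 2.0 * math.pi / n
    integral = sum(abs(math.cos((i + 0.5) * dt)) ** (alpha + 1.0) * dt
                   for i in range(n))
    return c * V ** (alpha - 1.0) * integral / math.pi
```

The V^(alpha-1) factor makes the amplitude and mode dependence explicit: for alpha < 1 the equivalent coefficient grows as the motion shrinks, which is one source of the potential advantages of nonlinear dampers noted above.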

Concrete gravity dams are an important part of the nation’s infrastructure. Many dams have been in service for over 50 years, during which time important advances in the methodologies for evaluation of natural phenomena hazards have caused the design-basis events to be revised upwards, in some cases significantly. Many existing dams fail to meet these revised safety criteria, and structural rehabilitation to meet newly revised criteria may be costly and difficult. A probabilistic safety analysis (PSA) provides a rational safety assessment and decision-making tool for managing the various sources of uncertainty that may impact dam performance. Fragility analysis, which depicts the uncertainty in the safety margin above specified hazard levels, is a fundamental tool in a PSA.

This study presents a methodology for developing fragilities of concrete gravity dams to assess their performance against hydrologic and seismic hazards. Models of varying degrees of complexity and sophistication were considered and compared. The methodology is illustrated using the Bluestone Dam on the New River in West Virginia, which was designed in the late 1930s. The hydrologic fragilities showed that the Bluestone Dam is unlikely to become unstable at the revised probable maximum flood (PMF), but it is likely that there will be significant cracking at the heel of the dam. On the other hand, the seismic fragility analysis indicated that sliding is likely if the dam were to be subjected to a maximum credible earthquake (MCE). Moreover, there would likely be tensile cracking at the neck of the dam at this level of seismic excitation.
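Fragility curves of this kind are commonly summarized by a lognormal form, P(limit state | demand x) = Phi(ln(x/x_m)/beta), with median capacity x_m and logarithmic dispersion beta. A minimal sketch (the lognormal form and any parameter values are generic assumptions, not the models of this study):

```python
import math

def fragility(demand, median, beta):
    """Lognormal fragility: conditional probability of reaching the limit
    state at the given hazard demand level, Phi(ln(demand/median)/beta),
    with the standard normal CDF evaluated via erf."""
    z = math.log(demand / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

By construction the curve passes through 0.5 at the median capacity and steepens as the dispersion beta shrinks; convolving such curves with the hazard curve yields the limit state probabilities discussed below.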

Probabilities of relatively severe limit states appear to be only marginally affected by extremely rare events (e.g. the PMF and MCE). Moreover, the risks posed by the extreme floods and earthquakes were not balanced for the Bluestone Dam, with the seismic hazard posing a relatively higher risk. Limit state probabilities for structural damage are much larger than the “de minimis” risk acceptable to society, and further investigation involving benefit-cost analyses to assess the risk posed by the Bluestone Dam appears warranted.

The macroscopic engineering properties of soil depend on microstructural features: the mineral constituents of individual soil particles, the pore fluid chemistry, and the particle arrangements. Due to the small size of clay particles, physico-chemical interactions between clay particles, mainly double-layer repulsion and van der Waals attraction, are as important as mechanical interactions. This thesis studies the three-dimensional behavior of cohesive soil from the microstructural point of view with the help of the discrete element method.

Rational and practical procedures to calculate the double-layer repulsive force and van der Waals attractive force between two cuboid clay particles in three-dimensional space are developed. Using cuboids to represent clay particles, a three-dimensional discrete element method program for cohesive soil is developed.
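The competition between the two physico-chemical forces can be illustrated with the textbook DLVO expression for two parallel flat plates; the cuboid-particle procedures of this thesis are considerably more involved. All constants below (Hamaker constant, inverse Debye length, repulsion scale) are illustrative assumptions:

```python
import math

def dlvo_pressure(h, A=1.0e-20, kappa=1.0e8, p0=1.5e5):
    """Net pressure (Pa) between two parallel platelets at separation h (m):
    exponentially decaying double-layer repulsion p0*exp(-kappa*h) minus
    van der Waals attraction A/(6*pi*h^3) (flat-plate form). Positive
    values are repulsive. All constants are illustrative assumptions."""
    repulsion = p0 * math.exp(-kappa * h)
    attraction = A / (6.0 * math.pi * h ** 3)
    return repulsion - attraction
```

Because the attraction decays algebraically and the repulsion exponentially, the net pressure is attractive at very small and very large separations with a repulsive barrier in between, the qualitative structure that governs whether clay particles flocculate or disperse.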

One-dimensional compression of a kaolinite assembly is simulated using a randomly generated 400-particle numerical specimen. The discrete element program is capable of capturing the representative trend of the compression behavior in terms of compressibility and anisotropy. Quantitatively, the numerical results fall within the range of laboratory experimental results.

A preliminary study of the influence of pore fluid chemistry on the behavior of cohesive soil is conducted using the discrete element method. Results showing changes in wall pressure as well as in the number of mechanical contacts are presented.

Many engineering systems have parameters that demonstrate significant random variation in space or in time. The stochastic finite element method (SFEM), which incorporates the uncertainty of system parameters into the finite element formulation, has become a powerful tool in analyzing complex engineering problems. The commonly used lower-order perturbation-based SFE analysis is often limited to linear or mildly nonlinear problems with small variability. Simulation-based SFE analysis is more flexible, and is applicable to virtually all types of problems. However, the efficiency of simulation-based SFE analysis is a research issue due to the computational cost of the repetitive FE analyses involved in the simulation.

The need to properly model system uncertainties gives rise to many of the numerical difficulties in a simulation-based SFE analysis, and such models must be developed to achieve computational efficiency. This study addresses the effect of uncertainties on the modeling and solution of stochastic problems from a different perspective, by identifying characteristics introduced by the uncertainties that can be utilized to improve the efficiency of SFE structural analysis. Stochastic ensemble averaging was found to have a positive impact on the efficiency of the calculation of lower-order response statistics by enabling the use of coarser meshes and/or larger time steps in SFE analyses. Efficiency was further improved by selecting proper initializations for the random samples, using the solution of the closest neighboring sample as the initialization. A computationally efficient method based on a sample tree data structure was developed to implement this optimal initialization strategy. These methods were applied in stability and modal analyses of random beam and frame structures. While uncertainty often introduces numerical complexity, it also has features that, when considered appropriately, can alleviate the numerical difficulty and improve the overall efficiency of a stochastic analysis.
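The neighboring-sample initialization can be sketched on a scalar stand-in problem: solve one nonlinear equation per random sample, warm-starting each Newton solve from the nearest already-solved sample. Here a linear scan plays the role of the sample-tree lookup, and the equation and all values are illustrative:

```python
def solve_newton(a, x0, tol=1e-12, max_iter=100):
    """Newton iterations for f(x) = x^3 + a*x - 1 = 0; returns (root, iters)."""
    x = x0
    for it in range(1, max_iter + 1):
        f = x ** 3 + a * x - 1.0
        x -= f / (3.0 * x ** 2 + a)
        if abs(x ** 3 + a * x - 1.0) < tol:
            return x, it
    return x, max_iter

def total_iterations(params, warm_start):
    """Solve one nonlinear problem per random sample. With warm_start, each
    sample is initialized from the solution of the nearest already-solved
    sample (a linear scan standing in for the sample-tree lookup);
    otherwise every sample starts cold from x0 = 5.0."""
    solved, total = [], 0            # solved: list of (parameter, root) pairs
    for a in params:
        if warm_start and solved:
            _, x0 = min(solved, key=lambda s: abs(s[0] - a))
        else:
            x0 = 5.0
        x, it = solve_newton(a, x0)
        solved.append((a, x))
        total += it
    return total
```

Because nearby parameter samples have nearby solutions, warm-started solves converge in far fewer iterations, which is the effect the sample-tree strategy exploits at the scale of full finite element solves.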

In recent years there has been an increased interest in applying fiber-reinforced polymer (FRP) reinforcing bars for concrete, as an alternative to steel reinforcing bars. The mechanical interaction between the FRP and the concrete, commonly called the *bond* behavior, is not well understood and has a significant effect upon the structural behavior of FRP-reinforced concrete. While many experimental bond studies have been conducted for a variety of different bars, modeling efforts to both quantify the underlying bond mechanisms and the resulting behavior have been very limited.

Smaller scale (*rib-scale*) models explicitly represent the surface structure of the bar and can thus be used to characterize the underlying mechanisms associated with the mechanical interlocking and to help optimize the surface structure of a bar. Existing rib-scale models have not addressed the progressive failure of the constituent materials, an issue addressed in this study. There is also a need to model the bond behavior of these bars at a scale amenable to the analysis of structural components. Existing “structural models” do not have sufficient generality to meet this objective since they do not address the dependence of the behavior upon the stress state or allow multiple failure modes (pullout and splitting) to be predicted — the other main issue addressed in this study.

An intermediate scale model (a *bar-scale* model originally developed for steel bars) is modified and applied to the bond of FRP bars. The model provides a macroscopic characterization of the bond behavior within the mathematical framework of elastoplasticity theory. The model incorporates a non-associated flow rule and elastoplastic coupling. Calibration and validation results for nine pullout specimens and one transfer length specimen demonstrate the model’s ability to predict the bond strength and suggest that the model has a measure of generality.

Rib-scale models for several specimens are developed to represent the mechanisms that produce the bond behavior. A micromechanical model using the *method of cells* is developed for the FRP. An elastoplastic-damage model within the framework of continuum damage mechanics is developed to characterize the plastic and damage behavior of the matrix and fibers. A simple adhesion model is developed to represent the fiber-matrix interaction. The rib-scale models are able to reproduce the bond strengths of three independent experimental studies with acceptable accuracy. The predicted failure modes and surface structure damage are consistent with experimental observations.

Harnessing the oceans’ vast, clean, and renewable energy to do useful work is a tempting prospect. For over a century, wave-energy conversion devices have been proposed, but none has emerged as a clearly practical and economical solution. One promising system is the McCabe Wave Pump (MWP), an articulated-barge system consisting of three barges hinged together with a large horizontal plate attached below the central barge. Water pumps are driven by the relative pitching motions of the barges excited by ocean waves. This high-pressure water can be used to produce potable water or electricity.

A simulation of the motions of a generic hinged-barge system is developed. The equations of motion are developed so that the nonlinear interactions between the barges are included. The simulation is general so that it can be used to study other hinged-barge systems, such as causeway ferry systems or floating airports. The simulation is used to predict the motions of a scale model that was studied in wave-tank experiments. In the experimental study, it was observed that the plate attached to the central barge acted as a pendulum. It was also observed that the phases of the pitching motions of the barges were such that the motions were enhanced by the pendulum effect at all of the wave periods studied. Hence, the increased angular displacements produced greater relative pitching motions which would lead to higher volume rates of pumped water in the operational system. The numerical simulations are found to predict the pendulum effect. In addition, the theory predicted that the after barge motions were significantly less than those of the forward barge, as was observed in the experimental study. The good agreement between the two data sets gives confidence in the ability of the theory to predict the performance of the MWP prototype.

The motions of the MWP prototype in regular ocean waves are predicted by the simulation, and its performance is calculated. By modifying the length of the system to be compatible with the wavelength for maximum pitching excitation, the power output of the system is shown to increase by more than 150%.

The need for improved procedures for civil infrastructure management has prompted a great deal of research in global structural condition assessment in recent years. In theory, global methods can identify damage in structures based upon changes in a small set of structural parameters obtained from full-scale measurements. Despite the fact that many new damage detection methods are being proposed, there remains a need for an objective means of estimating the utility of these new techniques.

Using a systems-based approach, a rational means was developed for evaluating the utility of structural condition assessment techniques. The approach considers both the engineering and the economic merits of damage detection methods in a unified fashion, in order to assess the value of the information that a given technique can provide to the management decision process. A Partially Observable Markov Decision Process (POMDP) provided a computationally tractable modeling framework that was readily adapted to this problem.

The POMDP modeling framework was applied in two application examples. The first example, which consisted of a hypothetical highway bridge system, demonstrated that a damage detection method must exceed a threshold of utility before the information it provides can effect a noticeable change in management policy. The second example was a prototype problem designed to highlight each of the modeling tasks involved in applying the POMDP framework to a structural management problem. The following features were included: (1) mathematical descriptions of the dynamic mechanisms that influence the evolution of a structure, such as resistance deterioration, loading, and maintenance, (2) a means of probabilistically relating observations of damage and the resistance of a structure, and (3) a means of characterizing costs associated with inspection strategies and maintenance activities. As part of this example, a quantitative approach for assessing the expected value of information provided by damage detection measurements was demonstrated. Several important lessons were learned from the application examples regarding the balance that must be achieved between accuracy and cost before a damage detection method can be considered worthy of implementation. Many of the conclusions drawn herein can be extended to systems of greater complexity.
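Feature (2) is the observation model of the POMDP, and it enters through the Bayesian belief update over damage states. A minimal sketch with a hypothetical two-state deterioration chain and inspection likelihoods (all transition and likelihood numbers are assumptions for illustration):

```python
def belief_update(belief, trans, obs_like, obs):
    """One POMDP belief update: b'(s) is proportional to
    O(obs | s) * sum_{s'} T(s | s') * b(s'). States are discrete damage
    levels; trans[s_prev][s] is the deterioration transition probability
    and obs_like[obs][s] the likelihood of an inspection outcome given
    the true state."""
    n = len(belief)
    predicted = [sum(trans[sp][s] * belief[sp] for sp in range(n)) for s in range(n)]
    unnorm = [obs_like[obs][s] * predicted[s] for s in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical example: state 0 = intact, state 1 = damaged.
trans = [[0.9, 0.1], [0.0, 1.0]]                 # intact may deteriorate; damaged is absorbing
obs_like = {"alarm": [0.2, 0.8], "clear": [0.8, 0.2]}
b = belief_update([1.0, 0.0], trans, obs_like, "alarm")
```

The value of a damage detection method in this framework comes from how sharply its observation likelihoods separate the states: a near-uninformative sensor leaves the belief, and hence the management policy, essentially unchanged.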

This study presents the results of experimental and numerical investigations of two factors that affect the conditions for stability and instability of granular materials. These are (1) imposed volume changes and (2) time-dependent behavior. These factors are studied in triaxial compression tests and are modeled by an existing single hardening constitutive model.

The effects of imposed volume changes on the instability of granular materials are studied in experiments on a fine sand in which water is forced into or out of a sand specimen during shearing. The effects on stress-strain behavior and stability are recorded and discussed. Specimens forced to dilate more than they would in a standard drained test exhibited a tendency toward unstable behavior, while specimens forced to contract more exhibited a tendency toward stable behavior. The conditions of imposed volume changes are then simulated with a single hardening constitutive model and compared with the experimental results. It is shown that the model can predict the corresponding loading paths and sand behavior as well as the resulting stable and unstable conditions.

The experiments conducted for investigation of time effects were performed on a crushed coral sand. The testing series involved shearing at different strain rates as well as jumps between strain rates, creep and relaxation tests, and stress drop tests followed by constant stress creep or constant strain relaxation tests. The experimental results are presented and discussed as a basis for development of a time-dependent version of the single hardening model. It was found that correspondence between creep and relaxation does not necessarily exist.