1,388 results found (search time: 31 ms)
1.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The data available to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation they are applied to, which leads to a “sample selection bias.” We enhance data-driven damage models by applying methods, not previously applied to damage modeling, that correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of them is also adjusted to our problem. These three methods are combined with stochastic generation of synthetic damage data. For both case studies, the bias correction techniques reduce model errors; for the mean bias error, the reduction can exceed 30%. The novel combination with stochastic data generation appears to enhance these techniques further. This shows that sample selection bias correction methods are beneficial for damage model transfer.
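A standard correction from the ML literature for this kind of bias is density-ratio importance weighting: training samples are reweighted by an estimate of p_target(x)/p_source(x). The sketch below is illustrative only (not the authors' implementation); it uses a crude histogram density-ratio on a synthetic one-dimensional hazard variable, where the source sample over-represents small values.

```python
import random

def density_ratio_weights(source, target, bins=5, lo=0.0, hi=1.0):
    """Estimate w(x) = p_target(x) / p_source(x) with smoothed histograms."""
    width = (hi - lo) / bins

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Laplace smoothing avoids division by zero in empty bins
        return [(c + 1) / (len(xs) + bins) for c in counts]

    ps, pt = hist(source), hist(target)

    def weight(x):
        i = min(int((x - lo) / width), bins - 1)
        return pt[i] / ps[i]

    return weight

def weighted_mean_damage(xs, ys, weight):
    """Importance-weighted estimate of mean damage under the target distribution."""
    num = sum(weight(x) * y for x, y in zip(xs, ys))
    den = sum(weight(x) for x in xs)
    return num / den

random.seed(0)
# Source sample over-represents small x; target is uniform on [0, 1]
source_x = [random.random() ** 2 for _ in range(5000)]
target_x = [random.random() for _ in range(5000)]
damage = list(source_x)          # illustrative ground truth: damage y = x
w = density_ratio_weights(source_x, target_x)
naive = sum(damage) / len(damage)                    # biased toward small damage
corrected = weighted_mean_damage(source_x, damage, w)  # close to the target mean 0.5
```

The naive mean is pulled toward the over-sampled low-damage region (about 1/3 here), while the reweighted estimate recovers roughly the target-population mean of 0.5.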
2.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials, based on a natural assumption about the heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. In phase III marker-stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly a marker-defined subpopulation, and that the marker's accuracy is assessed when it is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly dedicated to marker assessment, including those examining treatment effects in marker-negative patients. As existing and newly developed testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating their operating characteristics based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the testing strategies under the developed criteria are provided.
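To make the setting concrete, here is a toy fixed-sequence strategy of the general kind the abstract discusses: test the overall population first, then fall back to the marker-positive subpopulation. This is a generic illustration, not the criteria developed in the paper; the z-statistics, subgroup sizes, and the one-sided level are invented.

```python
from math import erf, sqrt

def p_value(z):
    """One-sided p-value for a standard normal test statistic."""
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

def marker_stratified_decision(z_pos, z_neg, n_pos, n_neg, alpha=0.025):
    """Toy fixed-sequence test for a marker-stratified trial:
    test the pooled (overall) statistic first; if it is not significant,
    fall back to the marker-positive subpopulation at the same level."""
    # Pooled z combines subgroup statistics weighted by sqrt(sample size)
    z_overall = (sqrt(n_pos) * z_pos + sqrt(n_neg) * z_neg) / sqrt(n_pos + n_neg)
    if p_value(z_overall) < alpha:
        return "efficacy: overall population"
    if p_value(z_pos) < alpha:
        return "efficacy: marker-positive subpopulation"
    return "no efficacy claim"

# Strong effect in marker-positives, none in marker-negatives:
print(marker_stratified_decision(z_pos=3.1, z_neg=-0.5, n_pos=150, n_neg=150))
```

Note how a near-null marker-negative result can dilute the overall test, so the claim is restricted to the marker-positive subpopulation, which is exactly the labelling question the paper's criteria address.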
3.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specifying the overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Event rates that change over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of an inadequate sample size, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent event endpoints; however, the variance of the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance of the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
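The setting can be illustrated with the standard per-arm sample-size formula for comparing negative binomial event rates on the log rate-ratio scale (variance parametrisation Var = μ + kμ², 1:1 allocation), together with a one-off blinded reestimation step. This is a simplified sketch, not the continuous monitoring procedure the paper develops, and all numbers (rates, dispersion, follow-up) are invented.

```python
from math import log, ceil

Z_A, Z_B = 1.959964, 0.841621   # z_{0.975} and z_{0.80}: two-sided 5% alpha, 80% power

def nb_sample_size_per_arm(lam_ctrl, theta, k, t):
    """Per-arm sample size for a negative binomial rate comparison.
    lam_ctrl: control event rate per unit time; theta: rate ratio under H1;
    k: overdispersion (Var = mu + k*mu^2); t: follow-up time."""
    lam_trt = theta * lam_ctrl
    # Approximate variance of the log rate-ratio estimate, per patient pair
    var_log_rr = (1 / (lam_trt * t) + k) + (1 / (lam_ctrl * t) + k)
    return ceil((Z_A + Z_B) ** 2 * var_log_rr / log(theta) ** 2)

def blinded_reestimate(total_events, patients, t, theta, k):
    """Blinded review: estimate the pooled rate from blinded data, split it
    under the assumed rate ratio theta, then recompute the sample size."""
    lam_pooled = total_events / (patients * t)
    lam_ctrl = 2 * lam_pooled / (1 + theta)
    return nb_sample_size_per_arm(lam_ctrl, theta, k, t)

# Planning: control rate 1.2 relapses/yr, 50% reduction, k = 0.5, 2 yrs follow-up
planned = nb_sample_size_per_arm(1.2, 0.5, 0.5, 2.0)
# Blinded interim data show a lower pooled rate, so a larger trial is needed
revised = blinded_reestimate(total_events=90, patients=100, t=2.0, theta=0.5, k=0.5)
print(planned, revised)
```

Because the reestimation uses only the pooled (blinded) event count, treatment allocation is never unblinded, which is what preserves the type I error rate and trial integrity.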
4.
This paper focuses on inference for suitable, generally nonlinear, functions in stochastic volatility models. In this context, a moving block bootstrap (MBB) approach is suggested and discussed for estimating the variance of the proposed estimators. Under mild assumptions, we show that the MBB procedure is weakly consistent. Moreover, a methodology for choosing the optimal block length in the MBB is proposed. Examples and simulations on the model are also presented to show the performance of the proposed procedure.
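The moving block bootstrap itself is standard: resample overlapping blocks of consecutive observations (so short-range dependence is preserved within blocks) and concatenate them to the original series length. A minimal sketch, applied to an AR(1)-style series rather than a stochastic volatility model, and with an arbitrary block length rather than the paper's optimal choice:

```python
import random
from statistics import mean, pvariance

def moving_block_bootstrap(series, block_len, n_boot, stat=mean, rng=None):
    """MBB: draw overlapping blocks of length block_len with replacement,
    concatenate to the original length, and return bootstrap replicates of stat."""
    rng = rng or random.Random()
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    reps = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            sample.extend(rng.choice(blocks))
        reps.append(stat(sample[:n]))
    return reps

rng = random.Random(42)
# Dependent toy series: x_t = 0.7 x_{t-1} + e_t
x = [0.0]
for _ in range(499):
    x.append(0.7 * x[-1] + rng.gauss(0, 1))

reps = moving_block_bootstrap(x, block_len=25, n_boot=500, rng=rng)
var_mean = pvariance(reps)   # MBB variance estimate for the sample mean
```

An i.i.d. bootstrap would understate this variance for positively autocorrelated data; keeping blocks intact is what lets the resamples reproduce the serial dependence.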
5.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if the evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in previous publications. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/ . Copyright © 2015 John Wiley & Sons, Ltd.
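A generic two-stage interim rule of the kind described here can be sketched as follows. This is not the MSDBE algorithm itself: the adjusted one-sided level (≈0.0294, a Pocock-type value) and the decision boundaries are illustrative assumptions; BE is assessed on the log geometric-mean-ratio scale against the conventional 0.80-1.25 limits.

```python
from math import log

def interim_decision(log_gmr, se, z_adj=1.8886):
    """Interim look in a two-stage BE trial (generic sketch, not MSDBE).
    log_gmr: interim estimate of the log geometric mean ratio; se: its
    standard error; z_adj ~ z_{1-0.0294}, an assumed adjusted level.
    Claim BE if the adjusted CI lies inside the BE limits; declare futility
    if the point estimate is already outside them; otherwise continue."""
    lo, hi = log(0.8), log(1.25)
    ci_lo, ci_hi = log_gmr - z_adj * se, log_gmr + z_adj * se
    if lo < ci_lo and ci_hi < hi:
        return "stop: claim BE"
    if not (lo < log_gmr < hi):
        return "stop: futility"
    return "continue to stage 2"

print(interim_decision(log(1.05), 0.05))   # precise estimate near 1: early BE claim
print(interim_decision(log(1.40), 0.10))   # estimate outside the limits: futility
print(interim_decision(log(1.10), 0.12))   # inconclusive: proceed to stage 2
```

Spending a reduced alpha at the interim look is what keeps the overall type I error controlled despite the two analyses.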
6.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of the hazard, density, and survivor functions of randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes; these yield an estimate of the hazard function, which in turn is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, semi-parametric estimates may be obtained by combining a maximum likelihood estimate of one crude lifetime with the empirical distribution function of the other. Simulation studies show that the survivor function estimates derived from crude lifetimes compare favourably with those given by the product-limit estimator when the crude lifetime distributions are chosen correctly. Other advantages are discussed.
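The nonparametric benchmark the abstract compares against, the product-limit (Kaplan-Meier) estimator, is easy to state: at each distinct event time, multiply the running survival probability by (1 - deaths/at-risk). A minimal pure-Python version with a small made-up dataset:

```python
def product_limit(times, events):
    """Kaplan-Meier product-limit estimate of the survivor function.
    times: observed times; events: 1 = event observed, 0 = right-censored.
    Returns [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = at_this_time = 0
        # Group all observations (events and censorings) tied at time t
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            at_this_time += 1
            i += 1
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= at_this_time
    return curve

# Six subjects: events at t = 1, 2, 3, 5; censorings at t = 2 and 4
km = product_limit([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```

Here the curve steps through 5/6, 2/3, 4/9 and finally 0; the censored observations contribute to the risk sets but never trigger a step down.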
7.
Book Reviews     
Books reviewed:
R.J. Adler, R.E. Feldman & M.S. Taqqu, A Practical Guide to Heavy Tails: Statistical Techniques and Applications.
J.J. Foster, A Beginner's Guide to Data Analysis Using SPSS for Windows.
N. Limnios and G. Oprisan, Semi-Markov Processes and Reliability.
8.
In this paper, we consider the task of determining expected values of sample moments when the sample members have been selected on the basis of noisy information, a recurring problem in the theory of evolution strategies. Exact expressions for expected values of sums of products of concomitants of selected order statistics are derived. Then, using Edgeworth and Cornish-Fisher approximations, explicit results are obtained that depend on coefficients which can be determined numerically. While the results are exact only for normal populations, it is shown experimentally that including skewness and kurtosis in the calculations can yield greatly improved results for other distributions.
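The underlying phenomenon is easy to reproduce by Monte Carlo: select the top individuals by a noisy score y = x + e and look at the true values x of those selected, i.e., concomitants of the selected order statistics. This simulation is a naive empirical check of the quantities the paper derives analytically; the population sizes and noise level are arbitrary.

```python
import random
from statistics import mean

def selected_parent_mean(n_offspring, n_selected, noise_sd, n_trials, rng):
    """Average true value x of the n_selected individuals ranked best by a
    noisy observation y = x + e, over n_trials independent populations."""
    acc = []
    for _ in range(n_trials):
        xs = [rng.gauss(0, 1) for _ in range(n_offspring)]
        ys = [x + rng.gauss(0, noise_sd) for x in xs]
        selected = sorted(zip(ys, xs), reverse=True)[:n_selected]
        acc.append(mean(x for _, x in selected))
    return mean(acc)

rng = random.Random(1)
exact = selected_parent_mean(20, 5, 0.0, 2000, rng)   # noise-free truncation selection
noisy = selected_parent_mean(20, 5, 1.0, 2000, rng)   # selection on noisy information
```

With unit noise, the correlation between y and x drops to about 1/sqrt(2), and the realized selection gain shrinks accordingly, which is exactly the attenuation the exact expressions quantify.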
9.
Sample selection in radiocarbon dating
Archaeologists working on the island of O'ahu, Hawai'i, use radiocarbon dating of samples of organic matter found trapped in fish-pond sediments to help them learn about the chronology of the construction and use of the aquacultural systems created by the Polynesians. At one particular site, Loko Kuwili, 25 organic samples were obtained and funds were available to date an initial nine. However, on calibration to the calendar scale, the radiocarbon determinations provided date estimates with very large variances. As a result, major issues of chronology remained unresolved and the archaeologists faced the prospect of another expensive programme of radiocarbon dating. This paper presents research that tackles the problem of selecting samples from those still available. Building on considerable recent research that uses Markov chain Monte Carlo methods to aid archaeologists in radiocarbon calibration and interpretation, we adopt the standard Bayesian framework of risk functions, which allows us to assess the optimal samples to send for dating. Although rather computer-intensive, our algorithms are simple to implement within the Bayesian radiocarbon framework already in place and produce results that are capable of direct interpretation by the archaeologists. By dating just three more samples from Loko Kuwili, the expected variance on the date of greatest interest could be substantially reduced.
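The decision problem, which further samples to date so as to shrink the variance of the estimate of interest most, can be caricatured with a simple precision-based criterion. This is a crude stand-in for the paper's Bayesian risk-function machinery (it ignores calibration-curve nonlinearity entirely), and all standard deviations below are invented.

```python
from itertools import combinations

def best_subset(existing_sds, candidate_sds, n_new):
    """Choose which n_new candidate samples to date so that the variance of a
    precision-weighted combined date estimate is minimised. Precisions add:
    Var_post = 1 / (sum of 1/sd^2 over all dated samples)."""
    base_precision = sum(1 / s ** 2 for s in existing_sds)

    def posterior_variance(subset):
        return 1 / (base_precision + sum(1 / s ** 2 for s in subset))

    return min(combinations(candidate_sds, n_new), key=posterior_variance)

# Nine samples already dated (sd in radiocarbon years); choose 3 of the rest
dated = [60, 80, 80, 100, 100, 120, 120, 150, 150]
candidates = [50, 70, 90, 110, 130, 160]
print(best_subset(dated, candidates, 3))
```

Under this toy criterion the three most precise candidates always win; the paper's contribution is precisely that, once calibration and the archaeological model are included, the optimal choice is no longer this obvious and must be searched for via MCMC.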
10.
A game-theoretic characterisation and analysis of the independent audit quality of listed companies in China
This paper analyses the game relationships among accounting firms, company management, independent directors, and regulators in the independent auditing of listed companies in China. Applying game theory, it studies the four parties' choices of interactive strategies under information asymmetry and, addressing the factors that influence the multi-party game, proposes strategies for improving independent audit quality, in the hope of contributing to better independent auditing of Chinese listed companies.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号