347 results in total (search time: 15 ms)
341.
In this paper, we propose a new family of distributions, the exponentiated exponential–geometric (E2G) distribution. The E2G distribution is a straightforward generalization of the exponential–geometric (EG) distribution proposed by Adamidis and Loukas [A lifetime distribution with decreasing failure rate, Statist. Probab. Lett. 39 (1998), pp. 35–42], and it accommodates increasing, decreasing, and unimodal hazard functions. It arises in a latent competing risks scenario in which the lifetime associated with a particular risk is not observable; only the minimum lifetime over all risks is observed. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its survival and hazard functions, moments, the rth moment of the ith order statistic, the mean residual lifetime, and the modal value. Maximum-likelihood inference is implemented straightforwardly. A misspecification simulation study, performed to assess the extent of misspecification errors when testing the EG distribution against the E2G, shows that it is usually possible to discriminate between the two distributions even for moderate sample sizes in the presence of censoring. The practical importance of the new distribution is demonstrated in three applications in which the E2G distribution is compared with several lifetime distributions.
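To make the latent competing risks construction concrete, here is a minimal simulation sketch (ours, not the authors' code): an EG lifetime is drawn as the minimum of a geometric number of i.i.d. exponential risks, as in the Adamidis and Loukas construction, and, since raising a cdf to an integer power α gives the distribution of the maximum of α i.i.d. copies, an E2G draw with integer α is the maximum of α EG draws. The names rEG, rE2G, lam, p, and alpha are illustrative, and the paper's exact parametrization may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def rEG(n, lam, p, rng):
    """Draw n exponential-geometric (EG) lifetimes via the latent
    competing-risks construction: each lifetime is the minimum of a
    geometric number M of i.i.d. Exp(lam) risks."""
    m = rng.geometric(1.0 - p, size=n)          # number of latent risks, P(M=m) = p^(m-1)(1-p)
    return np.array([rng.exponential(1.0 / lam, size=k).min() for k in m])

def rE2G(n, lam, p, alpha, rng):
    """For integer alpha, F^alpha is the cdf of the maximum of alpha
    i.i.d. copies, so an E2G draw is the max of alpha EG draws."""
    draws = np.column_stack([rEG(n, lam, p, rng) for _ in range(alpha)])
    return draws.max(axis=1)

x = rE2G(10_000, lam=1.0, p=0.5, alpha=3, rng=rng)
print(x.mean(), np.median(x))
```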
342.
We consider the issue of performing accurate small-sample testing inference in beta regression models, which are useful for modeling continuous variates that assume values in (0,1), such as rates and proportions. We derive the Bartlett correction to the likelihood ratio test statistic and also consider a bootstrap Bartlett correction. Using Monte Carlo simulations, we compare the finite-sample performance of the two corrected tests to that of the standard likelihood ratio test and to a variant that employs Skovgaard's adjustment; the latter is already available in the literature. The numerical evidence favors the corrected tests we propose. We also present an empirical application.
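The bootstrap Bartlett correction admits a simple generic recipe: the correction rescales the likelihood ratio statistic so that its mean matches the mean q of its limiting chi-square(q) distribution, so one can estimate E(LR) under the null by parametric bootstrap and use LR* = LR * q / mean(LR_b). The sketch below illustrates the mechanics on a deliberately simple toy problem (testing H0: rate = 1 in an i.i.d. exponential sample) rather than on a beta regression; the toy model and function names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def lr_exponential(x, rate0=1.0):
    """LR statistic for H0: rate = rate0 in an i.i.d. Exp(rate) sample."""
    n, xbar = len(x), x.mean()
    # unrestricted MLE of the rate is 1/xbar
    return 2 * (n * (-np.log(xbar) - 1.0) - n * (np.log(rate0) - rate0 * xbar))

def bootstrap_bartlett(x, rate0=1.0, B=999, rng=rng):
    q = 1                                    # degrees of freedom of the test
    lr = lr_exponential(x, rate0)
    # resample under the null and estimate E(LR); the bootstrap Bartlett
    # correction rescales LR so that its mean matches the chi-square mean q
    lr_b = np.array([lr_exponential(rng.exponential(1 / rate0, size=len(x)), rate0)
                     for _ in range(B)])
    return lr, lr * q / lr_b.mean()

x = rng.exponential(1.0, size=20)
lr, lr_corrected = bootstrap_bartlett(x)
print(lr, lr_corrected)
```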
343.
The assumption that all errors share the same variance (homoskedasticity) is commonly violated in empirical analyses carried out using the linear regression model. A widely adopted modeling strategy is to perform point estimation by ordinary least squares and then carry out testing inference based on these point estimators and heteroskedasticity-consistent standard errors. These tests, however, tend to be size-distorted when the sample size is small and the data contain atypical observations. Furno [Small sample behavior of a robust heteroskedasticity consistent covariance matrix estimator, Journal of Statistical Computation and Simulation 54 (1996), pp. 115–128] suggested performing point estimation using a weighted least squares mechanism in order to attenuate the effect of leverage points on the associated inference. In this article, we follow up on her proposal and define heteroskedasticity-consistent covariance matrix estimators based on residuals obtained using robust estimation methods. We report Monte Carlo simulation results (size and power) on the finite-sample performance of different heteroskedasticity-robust tests. Overall, the results favor inference based on HC0 tests constructed using robust residuals.
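As a rough illustration of the idea (not the article's code), one can fit the regression with a robust M-estimator, here Huber weighting via statsmodels' RLM as one possible choice of robust method, plug the robust residuals into the HC0 sandwich (X'X)^(-1) X' diag(e_i^2) X (X'X)^(-1), and base quasi-t statistics on the resulting standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 50
X = sm.add_constant(rng.normal(size=(n, 2)))
# heteroskedastic errors whose variance grows with the first regressor
y = X @ np.array([1.0, 2.0, -1.0]) + np.exp(0.5 * X[:, 1]) * rng.normal(size=n)

# robust point estimation (Huber M-estimator) to attenuate leverage effects
rfit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
e = rfit.resid                                # robust residuals

# HC0 sandwich built from the robust residuals:
# (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (e[:, None] ** 2 * X)
cov_hc0 = XtX_inv @ meat @ XtX_inv

se = np.sqrt(np.diag(cov_hc0))
print(rfit.params, se, rfit.params / se)      # quasi-t statistics
```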
344.
In obstetrics and gynecology, knowledge about how women's features are associated with childbirth is important. It supports the establishment of guidelines and can help managers describe the dynamics of pregnant women's hospital stays. Time is therefore a variable of great importance and can be described by survival models. An issue that should be considered in the modeling is the inclusion of women for whom the duration of labor cannot be observed due to fetal death, generating a proportion of times equal to zero. Additionally, another proportion of women's times may be censored due to some intervention. The aim of this paper is to present the log-normal zero-inflated cure regression model and to evaluate likelihood-based parameter estimation through a simulation study. In general, the inference procedures performed better for larger samples and low proportions of zero inflation and cure. To exemplify how this model can be an important tool for investigating the course of the childbirth process, we consider the Better Outcomes in Labor Difficulty project dataset and show that parity and educational level are associated with the main outcomes. We acknowledge the World Health Organization for granting us permission to use the dataset.
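A hedged sketch of the likelihood structure may help: an observation is either an exact zero (mass p0, e.g. fetal death), cured (mass p1, visible only through censoring), or a susceptible case with a log-normal event time, so a right-censored time contributes the mixture survival p1 + (1 - p0 - p1) * S_LN(t). The parametrization, the link used for (p0, p1), and all variable names below are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, t, event):
    """Log-normal zero-inflated cure model (illustrative parametrization).

    theta = (a0, a1, mu, log_sigma); the zero-inflation and cure masses
    (p0, p1) come from (a0, a1) via a softmax-type map so that
    p0, p1 >= 0 and p0 + p1 <= 1.
    t     : observed times (t == 0 encodes the zero-inflated outcome)
    event : 1 if the event was observed, 0 if right-censored
    """
    a0, a1, mu, log_s = theta
    s = np.exp(log_s)
    e0, e1 = np.exp(a0), np.exp(a1)
    p0, p1 = e0 / (1 + e0 + e1), e1 / (1 + e0 + e1)   # zero and cure mass
    psus = 1.0 - p0 - p1                              # susceptible fraction

    tpos = np.where(t > 0, t, 1.0)                    # guard log(0); unused where t == 0
    z = (np.log(tpos) - mu) / s
    ll = np.where(
        t == 0,
        np.log(p0),                                   # exact zero (e.g. fetal death)
        np.where(
            event == 1,
            np.log(psus) + norm.logpdf(z) - np.log(s * tpos),  # observed log-normal event
            np.log(p1 + psus * norm.sf(z)),           # censored: cured or still at risk
        ),
    )
    return -ll.sum()

# toy usage with simulated data
rng = np.random.default_rng(3)
n = 500
lab = rng.choice([0, 1, 2], size=n, p=[0.1, 0.2, 0.7])  # zero / cured / susceptible
t = np.where(lab == 2, rng.lognormal(1.0, 0.5, n), 0.0)
c = rng.uniform(0, 10, n)                               # censoring times
event = ((lab == 2) & (t <= c)).astype(int)
t = np.where(lab == 0, 0.0, np.where(event == 1, t, c))
fit = minimize(neg_loglik, x0=np.array([-1.0, -1.0, 0.0, 0.0]),
               args=(t, event), method="Nelder-Mead")
print(fit.x)
```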
345.
To prevent and control foodborne diseases, there is a fundamental need to identify the foods that are most likely to cause illness. The goal of this study was to rank 25 commonly consumed food products associated with Salmonella enterica contamination in the Central Region of Mexico. A multicriteria decision analysis (MCDA) framework was developed to obtain an S. enterica risk score for each food product based on four criteria: probability of exposure to S. enterica through domestic food consumption (Se); S. enterica growth potential during home storage (Sg); per capita consumption (Pcc); and food attribution of S. enterica outbreaks (So). Risk scores were calculated as Se·W1 + Sg·W2 + Pcc·W3 + So·W4, where each criterion was assigned a normalized value (1–5) and the relative weights (W) were elicited from 22 experts. Se had the largest effect on the risk score, being the criterion with the highest weight (35%; 95% CI 20%–60%), followed by So (24%; 5%–50%), Sg (23%; 10%–40%), and Pcc (18%; 10%–35%). The results identified chicken (4.4 ± 0.6), pork (4.2 ± 0.6), and beef (4.2 ± 0.5) as the highest-risk foods, followed by seed fruits (3.6 ± 0.5), tropical fruits (3.4 ± 0.4), and dried fruits and nuts (3.4 ± 0.5), while the food products with the lowest risk were yogurt (2.1 ± 0.3), chorizo (2.1 ± 0.4), and cream (2.0 ± 0.3). Expert-based weighting and equal weighting produced well-correlated rankings (R = 0.96) with no significant differences in ranking order within the top 20 tier. This study can help risk managers select interventions and develop targeted surveillance programs against S. enterica in high-risk food products.
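The scoring rule is a plain weighted sum and is easy to reproduce. In the sketch below, only the weights come from the abstract; the food entries and their 1–5 criterion scores are made-up placeholders for demonstration.

```python
# mean expert-elicited weights reported in the abstract
weights = {"Se": 0.35, "Sg": 0.23, "Pcc": 0.18, "So": 0.24}

# illustrative (made-up) normalized criterion scores on a 1-5 scale
foods = {
    "chicken": {"Se": 5, "Sg": 4, "Pcc": 4, "So": 4},
    "yogurt":  {"Se": 2, "Sg": 2, "Pcc": 3, "So": 1},
}

def risk_score(scores, weights):
    """Risk = Se*W1 + Sg*W2 + Pcc*W3 + So*W4 (weighted-sum MCDA)."""
    return sum(scores[k] * weights[k] for k in weights)

ranked = sorted(foods, key=lambda f: risk_score(foods[f], weights), reverse=True)
for f in ranked:
    print(f, round(risk_score(foods[f], weights), 2))
```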
346.
VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations - National Taxonomy of Exempt Entities (NTEE) codes have become the primary classifier of nonprofit missions since they were...
347.
We develop a search‐theoretic model of financial intermediation in an over‐the‐counter market and study how trading frictions affect the distribution of asset holdings and standard measures of liquidity. A distinctive feature of our theory is that it allows for unrestricted asset holdings, so market participants can accommodate trading frictions by adjusting their asset positions. We show that these individual responses of asset demands constitute a fundamental feature of illiquid markets: they are a key determinant of trade volume, bid–ask spreads, and trading delays—the dimensions of market liquidity that search‐based theories seek to explain.