Full-text access type
Paid full text | 334 articles |
Free | 14 articles |
Subject classification
Management | 40 articles |
Ethnology | 2 articles |
Demography | 29 articles |
Book series and collections | 1 article |
Theory and methodology | 27 articles |
General | 3 articles |
Sociology | 129 articles |
Statistics | 117 articles |
Publication year
2023 | 4 articles |
2022 | 3 articles |
2021 | 2 articles |
2020 | 14 articles |
2019 | 24 articles |
2018 | 28 articles |
2017 | 29 articles |
2016 | 36 articles |
2015 | 6 articles |
2014 | 15 articles |
2013 | 53 articles |
2012 | 36 articles |
2011 | 20 articles |
2010 | 10 articles |
2009 | 13 articles |
2008 | 6 articles |
2007 | 6 articles |
2006 | 3 articles |
2005 | 4 articles |
2004 | 5 articles |
2003 | 6 articles |
2002 | 3 articles |
2001 | 6 articles |
2000 | 4 articles |
1999 | 1 article |
1997 | 4 articles |
1996 | 1 article |
1995 | 2 articles |
1993 | 1 article |
1987 | 1 article |
1979 | 1 article |
1973 | 1 article |
Sort order: 348 results found, search time 31 ms
341.
Francisco Zorondo-Rodríguez, Erik Gómez-Baggethun, Kathryn Demps, Pere Ariza-Montobbio, Claude García, Victoria Reyes-García 《Social Indicators Research》2014,115(1):441-456
Improving quality of life (QoL) is one of the main goals of many public policies. A useful tool for measuring QoL needs to strike a good balance between indicators guided by theory (top-down approach) and indicators defined by local people (bottom-up approach). However, QoL measurement tools often neglect elements that define the standard of living at the local level. In this paper, we analyse the correspondence between the Human Development Index, an indicator adopted by governments to assess QoL, and the elements that local people define as important in their QoL, called here local means. Using a free-listing technique, we collected information from 114 individuals in Kodagu, Karnataka (India), to capture the local means defining QoL. We then compared these local means with the indicators used by the Human Development Report (HDR) of Karnataka, the main QoL measurement tool in Kodagu. The list of local means included access to basic facilities and many issues related to agriculture and natural resource management as elements locally defining QoL. We also found that the HDR does not capture the means people define as indicators of QoL. Our findings suggest an important gap between the QoL indicators currently considered by public policies and the means of QoL defined by people. Our study provides insights toward a set of plausible local indicators useful for balancing top-down and bottom-up approaches in local public policy.
342.
In this paper, we propose a new family of distributions, the exponentiated exponential–geometric (E2G) distribution. The E2G distribution is a straightforward generalization of the exponential–geometric (EG) distribution proposed by Adamidis and Loukas [A lifetime distribution with decreasing failure rate, Statist. Probab. Lett. 39 (1998), pp. 35–42], and accommodates increasing, decreasing and unimodal hazard functions. It arises in a latent competing-risks scenario, where the lifetime associated with a particular risk is not observable and only the minimum lifetime among all risks is observed. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its survival and hazard functions, moments, the rth moment of the ith order statistic, mean residual lifetime and modal value. Maximum-likelihood inference is implemented straightforwardly. From a mis-specification simulation study performed to assess the extent of the mis-specification errors incurred when testing the EG distribution against the E2G, we observed that it is usually possible to discriminate between the two distributions even for moderate samples in the presence of censoring. The practical importance of the new distribution is demonstrated in three applications in which we compare the E2G distribution with several lifetime distributions.
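The latent competing-risks construction behind the EG family can be made concrete with a short simulation: the observed lifetime is the minimum of a geometric number of i.i.d. exponential risk times. A minimal sketch (parameter values `lam` and `theta` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_eg(lam, theta, size):
    """Sample from the exponential-geometric (EG) distribution via its
    latent competing-risks construction: the observed lifetime is the
    minimum of M i.i.d. Exponential(lam) risk times, with M ~ Geometric(theta)."""
    m = rng.geometric(theta, size=size)  # number of latent risks, M >= 1
    return np.array([rng.exponential(1.0 / lam, size=k).min() for k in m])

t = sample_eg(lam=2.0, theta=0.3, size=10_000)
```

Summing out the geometric variable gives the EG survival function S(t) = theta·exp(-lam·t) / (1 - (1 - theta)·exp(-lam·t)), which the empirical survival of `t` closely tracks.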
343.
In this article, we propose a bivariate long-term distribution based on the Farlie–Gumbel–Morgenstern copula model. The proposed model allows for the presence of censored data and covariates. For inferential purposes, a Bayesian approach via Markov chain Monte Carlo (MCMC) was adopted. Further, some discussion of model selection criteria is given. In order to examine outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback–Leibler divergence. The newly developed procedures are illustrated on artificial and real data.
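The Farlie–Gumbel–Morgenstern copula underlying the model has the closed form C(u, v) = uv[1 + θ(1 − u)(1 − v)], |θ| ≤ 1, and can be sampled by inverting the conditional CDF of V given U. A minimal sketch of that sampler (it illustrates only the FGM copula itself, not the article's long-term survival model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fgm(theta, size):
    """Sample (U, V) from the Farlie-Gumbel-Morgenstern copula
    C(u, v) = u v [1 + theta (1 - u)(1 - v)],  |theta| <= 1,
    by inverting the conditional CDF of V given U = u."""
    u = rng.uniform(size=size)
    w = rng.uniform(size=size)
    a = theta * (1.0 - 2.0 * u)
    # Conditional CDF: C(v | u) = v + a v (1 - v); setting it equal to w
    # gives the quadratic a v^2 - (1 + a) v + w = 0, whose root in [0, 1] is:
    safe_a = np.where(np.abs(a) < 1e-10, 1.0, a)  # placeholder to avoid 0-division
    root = (1.0 + a - np.sqrt((1.0 + a) ** 2 - 4.0 * safe_a * w)) / (2.0 * safe_a)
    v = np.where(np.abs(a) < 1e-10, w, root)      # a ~ 0: V independent of U
    return u, v

u, v = sample_fgm(theta=0.8, size=20_000)
```

FGM dependence is mild by construction: Spearman's rho equals θ/3, which the sample correlation of ranks recovers.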
344.
We consider the issue of performing accurate small-sample testing inference in beta regression models, which are useful for modeling continuous variates that assume values in (0,1), such as rates and proportions. We derive the Bartlett correction to the likelihood ratio test statistic and also consider a bootstrap Bartlett correction. Using Monte Carlo simulations we compare the finite sample performances of the two corrected tests to that of the standard likelihood ratio test and also to its variant that employs Skovgaard's adjustment; the latter is already available in the literature. The numerical evidence favors the corrected tests we propose. We also present an empirical application.
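The bootstrap Bartlett idea is to estimate the expected value of the likelihood-ratio statistic under the null by parametric bootstrap and rescale the observed statistic so its mean matches its chi-square reference. A minimal sketch in a simple fully specified Exponential model (not the beta-regression setting of the abstract; sample size and rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def lr_stat(x, lam0):
    """Likelihood-ratio statistic for H0: rate = lam0 in an Exponential model."""
    n = x.size
    lam_hat = 1.0 / x.mean()
    return 2.0 * n * (np.log(lam_hat / lam0) - 1.0 + lam0 / lam_hat)

def bootstrap_bartlett(x, lam0, n_boot=500):
    """Bootstrap Bartlett correction: estimate E[LR] under H0 by parametric
    bootstrap and rescale the observed statistic toward its chi-square
    reference mean (df = 1 here)."""
    lr_obs = lr_stat(x, lam0)
    lr_boot = np.array([
        lr_stat(rng.exponential(1.0 / lam0, size=x.size), lam0)
        for _ in range(n_boot)
    ])
    c = lr_boot.mean() / 1.0  # df = 1
    return lr_obs, lr_obs / c

x = rng.exponential(1.0 / 2.0, size=15)  # small sample, true rate 2
lr, lr_corr = bootstrap_bartlett(x, lam0=2.0)
```

The correction factor `c` exceeds 1 when the small-sample null distribution of LR is inflated relative to chi-square, which is exactly the size distortion the corrected tests target.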
345.
Verônica M. C. Lima, Tatiene C. Souza, Francisco Cribari-Neto, Gilênio B. Fernandes 《Communications in Statistics - Simulation and Computation》2013,42(1):194-206
The assumption that all errors share the same variance (homoskedasticity) is commonly violated in empirical analyses carried out using the linear regression model. A widely adopted modeling strategy is to perform point estimation by ordinary least squares and then perform testing inference based on these point estimators and heteroskedasticity-consistent standard errors. These tests, however, tend to be size-distorted when the sample size is small and the data contain atypical observations. Furno (1996) suggested performing point estimation using a weighted least squares mechanism in order to attenuate the effect of leverage points on the associated inference. In this article, we follow up on her proposal and define heteroskedasticity-consistent covariance matrix estimators based on residuals obtained using robust estimation methods. We report Monte Carlo simulation results (size and power) on the finite sample performance of different heteroskedasticity-robust tests. Overall, the results favor inference based on HC0 tests constructed using robust residuals.
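The HC0 estimator at the core of these tests is the standard sandwich form (X'X)⁻¹ X' diag(eᵢ²) X (X'X)⁻¹. A minimal sketch using OLS residuals (the article's contribution is to plug in residuals from a robust fit instead, which is not reproduced here; the simulated design is illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def hc0_cov(X, y):
    """HC0 heteroskedasticity-consistent covariance estimator:
    (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}, with e the OLS residuals."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    xtx_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * e[:, None] ** 2)  # X' diag(e^2) X
    return beta, xtx_inv @ meat @ xtx_inv

n = 200
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + x, size=n)  # heteroskedastic errors
beta, cov = hc0_cov(X, y)
se = np.sqrt(np.diag(cov))
```

The diagonal of `cov` gives the heteroskedasticity-consistent standard errors used to build the quasi-t test statistics the article studies.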
346.
The Log-Normal zero-inflated cure regression model for labor time in an African obstetric population
Hayala Cristina Cavenague de Souza, Francisco Louzada, Mauro Ribeiro de Oliveira, Bukola Fawole, Adesina Akintan, Lawal Oyeneyin, Wilfred Sanni, Gleici da Silva Castro Perdon 《Journal of applied statistics》2022,49(9):2416
In obstetrics and gynecology, knowledge of how women's features are associated with childbirth is important. It supports the development of guidelines and can help managers describe the dynamics of pregnant women's hospital stays. Time is therefore a variable of great importance and can be described by survival models. An issue that should be considered in the modeling is the inclusion of women for whom the duration of labor cannot be observed due to fetal death, generating a proportion of times equal to zero. Additionally, another proportion of women's times may be censored due to some intervention. The aim of this paper was to present the Log-Normal zero-inflated cure regression model and to evaluate likelihood-based parameter estimation in a simulation study. In general, the inference procedures performed better for larger samples and low proportions of zero inflation and cure. To exemplify how this model can be an important tool for investigating the course of the childbirth process, we considered the Better Outcomes in Labor Difficulty project dataset and showed that parity and educational level are associated with the main outcomes. We acknowledge the World Health Organization for granting us permission to use the dataset.
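The data structure the model targets has three components: a point mass at zero (fetal death), a cured fraction that never experiences the event (observed only as censored), and a Log-Normal labor time under random censoring. A minimal simulation sketch of that mixture (all parameter values are illustrative assumptions, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_zi_cure(n, p_zero, p_cure, mu, sigma, cens_rate):
    """Simulate times with the three components described in the abstract:
    a point mass at zero (observed), a cured fraction (always right-censored),
    and a Log-Normal(mu, sigma) time subject to independent Exponential censoring."""
    group = rng.choice(3, size=n, p=[p_zero, p_cure, 1.0 - p_zero - p_cure])
    t = rng.lognormal(mean=mu, sigma=sigma, size=n)  # latent labor time
    c = rng.exponential(1.0 / cens_rate, size=n)     # latent censoring time
    time = np.where(group == 0, 0.0,
           np.where(group == 1, c, np.minimum(t, c)))
    # event = 1 when the outcome is observed (a zero or an uncensored time)
    event = np.where(group == 0, 1,
            np.where(group == 1, 0, (t <= c).astype(int)))
    return time, event

time, event = simulate_zi_cure(10_000, p_zero=0.10, p_cure=0.15,
                               mu=1.5, sigma=0.4, cens_rate=0.05)
```

Datasets of this shape are what the likelihood in the paper is built to fit: the zero-inflation and cure probabilities act as mixture weights alongside the Log-Normal component.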
347.
Angélica Godínez-Oviedo, Fernando Sampedro, John P. Bowman, Francisco J. Garcés-Vega, Montserrat Hernández-Iturriaga 《Risk analysis》2023,43(2):308-323
To prevent and control foodborne diseases, there is a fundamental need to identify the foods that are most likely to cause illness. The goal of this study was to rank 25 commonly consumed food products associated with Salmonella enterica contamination in the Central Region of Mexico. A multicriteria decision analysis (MCDA) framework was developed to obtain an S. enterica risk score for each food product based on four criteria: probability of exposure to S. enterica through domestic food consumption (Se); S. enterica growth potential during home storage (Sg); per capita consumption (Pcc); and food attribution of S. enterica outbreaks (So). Risk scores were calculated by the equation Se*W1+Sg*W2+Pcc*W3+So*W4, where each criterion was assigned a normalized value (1–5) and the relative weights (W) were defined by the opinion of 22 experts. Se had the largest effect on the risk score, being the criterion with the highest weight (35%; 95% CI 20%–60%), followed by So (24%; 5%–50%), Sg (23%; 10%–40%), and Pcc (18%; 10%–35%). The results identified chicken (4.4 ± 0.6), pork (4.2 ± 0.6), and beef (4.2 ± 0.5) as the highest risk foods, followed by seed fruits (3.6 ± 0.5), tropical fruits (3.4 ± 0.4), and dried fruits and nuts (3.4 ± 0.5), while the food products with the lowest risk were yogurt (2.1 ± 0.3), chorizo (2.1 ± 0.4), and cream (2.0 ± 0.3). Approaches with expert-based weighting and equal weighting showed good correlation (R2 = 0.96) and did not show significant differences in ranking order within the top 20 tier. This study can help risk managers select interventions and develop targeted surveillance programs against S. enterica in high-risk food products.
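The scoring equation is a plain weighted sum and is easy to reproduce with the mean weights reported in the abstract. A minimal sketch (the criterion values passed in the example are hypothetical, not taken from the article's data):

```python
import numpy as np

# Mean criterion weights elicited from 22 experts, as reported in the
# abstract: Se 35%, Sg 23%, Pcc 18%, So 24%.
W = np.array([0.35, 0.23, 0.18, 0.24])

def risk_score(se, sg, pcc, so, w=W):
    """MCDA risk score Se*W1 + Sg*W2 + Pcc*W3 + So*W4, where each
    criterion is normalized to the 1-5 scale used in the study."""
    scores = np.array([se, sg, pcc, so], dtype=float)
    assert ((scores >= 1) & (scores <= 5)).all(), "criteria must lie in [1, 5]"
    return float(scores @ w)

# hypothetical criterion values for one product (illustrative only)
r = risk_score(se=5, sg=4, pcc=3, so=4)
```

Because the weights sum to 1 and each criterion lies in [1, 5], the resulting score is also bounded in [1, 5], matching the scale of the rankings reported (e.g. chicken at 4.4 ± 0.6).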
348.
Santamarina Francisco J., Lecy Jesse D., van Holm Eric Joseph 《Voluntas: International Journal of Voluntary and Nonprofit Organizations》2023,34(1):29-38
National Taxonomy of Exempt Entities (NTEE) codes have become the primary classifier of nonprofit missions since they were...